"Moron" considered dangerous

In all of the foofaraw about Rex Tillerson calling Donald Trump a "fucking moron", no one seems to have picked up on the fact that Mr. Tillerson may have endangered his immortal soul.

In "The S-word and the F-word", 6/12/2004, I noted that the gospel quotes Jesus delivering a strongly-worded threat to people who call other people stupid. Thus Matthew 5:22:

Original: Ἐγὼ δὲ λέγω ὑμῖν ὅτι πᾶς ὁ ὀργιζόμενος τῷ ἀδελφῷ αὐτοῦ ἔνοχος ἔσται τῇ κρίσει: ὃς δ᾽ ἂν εἴπῃ τῷ ἀδελφῷ αὐτοῦ Ῥακά, ἔνοχος ἔσται τῷ συνεδρίῳ: ὃς δ᾽ ἂν εἴπῃ Μωρέ, ἔνοχος ἔσται εἰς τὴν γέενναν τοῦ πυρός.

Transliteration: Egô de legô humin hoti pas ho orgizomenos tôi adelphôi autou enochos estai têi krisei: hos d' an eipêi tôi adelphôi autou Rhaka, enochos estai tôi sunedriôi: hos d' an eipêi Môre, enochos estai eis tên geennan tou puros.

KJV: but I say unto you, That whosoever is angry with his brother without a cause shall be in danger of the judgment: and whosoever shall say to his brother, Raca, shall be in danger of the council: but whosoever shall say, Thou fool, shall be in danger of hell fire.

NASB: But I say to you that everyone who is angry with his brother shall be guilty before the court; and whoever says to his brother, ' You good-for-nothing,' shall be guilty before the supreme court; and whoever says, 'You fool,' shall be guilty enough to go into the fiery hell.

The Greek word translated "fool" in that verse is precisely μωρός, which is the etymon of "moron", as the OED explains:

Etymology: ancient Greek μωρόν, neuter of μωρός, (Attic) μῶρος foolish, stupid (further etymology uncertain: a connection with Sanskrit mūra foolish, stupid, is now generally rejected).

I would have filed this post under "theology of language", but our wildly excessive number of categories doesn't include that possibility.

 

The whole web at maximum FPS: How WebRender gets rid of jank

The Firefox Quantum release is getting close. It brings many performance improvements, including the super fast CSS engine that we brought over from Servo.

But there’s another big piece of Servo technology that’s not in Firefox Quantum quite yet, though it’s coming soon. That’s WebRender, which is being added to Firefox as part of the Quantum Render project.

Drawing of a jet engine labeled with the different Project Quantum projects

WebRender is known for being extremely fast. But WebRender isn’t really about making rendering faster. It’s about making it smoother.

With WebRender, we want apps to run at a silky smooth 60 frames per second (FPS) or better no matter how big the display is or how much of the page is changing from frame to frame. And it works. Pages that chug along at 15 FPS in Chrome or today’s Firefox run at 60 FPS with WebRender.

So how does WebRender do that? It fundamentally changes the way the rendering engine works to make it more like a 3D game engine.

Let’s take a look at what this means. But first…

What does a renderer do?

In the article on Stylo, I talked about how the browser goes from HTML and CSS to pixels on the screen, and how most browsers do this in five steps.

We can split these five steps into two halves. The first half basically builds up a plan. To make this plan, it combines the HTML and CSS with information like the viewport size to figure out exactly what each element should look like—its width, height, color, etc. The end result is something called a frame tree or a render tree.

The second half—painting and compositing—is what a renderer does. It takes that plan and turns it into pixels to display on the screen.

Diagram dividing the 5 stages of rendering into two groups, with a frame tree being passed from part 1 to part 2

But the browser doesn’t just have to do this once for a web page. It has to do it over and over again for the same web page. Any time something changes on this page—for example, a div is toggled open—the browser has to go through a lot of these steps.

Diagram showing the steps that get redone on a click: style, layout, paint, and composite

Even in cases where nothing’s really changing on the page—for example where you’re scrolling or where you are highlighting some text on the page—the browser still has to go through at least some of the second part again to draw new pixels on the screen.

Diagram showing the steps that get redone on scroll: composite

If you want things like scrolling or animation to look smooth, they need to be going at 60 frames per second.

You may have heard this phrase—frames per second (FPS)—before, without being sure what it meant. I think of this like a flip book. It’s like a book of drawings that are static, but you can use your thumb to flip through so that it looks like the pages are animated.

In order for the animation in this flip book to look smooth, you need to have 60 pages for every second in the animation.

Picture of a flipbook with a smooth animation next to it

The pages in this flip book are made out of graph paper. There are lots and lots of little squares, and each of the squares can only contain one color.

The job of the renderer is to fill in the boxes in this graph paper. Once all of the boxes in the graph paper are filled in, it is finished rendering the frame.

Now, of course there is not actual graph paper inside of your computer. Instead, there’s a section of memory in the computer called a frame buffer. Each memory address in the frame buffer is like a box in the graph paper… it corresponds to a pixel on the screen. The browser will fill in each slot with the numbers that represent the color in RGBA (red, green, blue, and alpha) values.

A stack of memory addresses with RGBA values that are correlated to squares in a grid (pixels)

When the display needs to refresh itself, it will look at this section of memory.

Most computer displays will refresh 60 times per second. This is why browsers try to render pages at 60 frames per second. That means the browser has 16.67 milliseconds to do all of the setup —CSS styling, layout, painting—and fill in all of the slots in the frame buffer with pixel colors. This time frame between two frames (16.67 ms) is called the frame budget.
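To make the “graph paper” and frame budget ideas concrete, here’s a minimal Rust sketch. None of these types are Firefox’s actual internals; FrameBuffer, Rgba, and set_pixel are illustrative names only.

```rust
// A minimal sketch: a frame buffer is a flat block of memory with one RGBA
// slot per pixel, and the frame budget is simply 1000 ms / 60 frames.
#[derive(Clone, Copy)]
struct Rgba { r: u8, g: u8, b: u8, a: u8 }

struct FrameBuffer {
    width: usize,
    height: usize,
    pixels: Vec<Rgba>, // one slot per pixel, row-major order
}

impl FrameBuffer {
    fn new(width: usize, height: usize) -> Self {
        let clear = Rgba { r: 0, g: 0, b: 0, a: 0 };
        FrameBuffer { width, height, pixels: vec![clear; width * height] }
    }

    // "Filling in one box of the graph paper": write one pixel's color.
    fn set_pixel(&mut self, x: usize, y: usize, color: Rgba) {
        let index = y * self.width + x; // memory slot = row * width + column
        self.pixels[index] = color;
    }
}

fn main() {
    let mut fb = FrameBuffer::new(1920, 1080);
    fb.set_pixel(0, 0, Rgba { r: 255, g: 255, b: 255, a: 255 });

    let frame_budget_ms = 1000.0 / 60.0; // ≈ 16.67 ms to fill every slot
    println!("{} slots, {:.2} ms budget", fb.width * fb.height, frame_budget_ms);
}
```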

Sometimes you hear people talk about dropped frames. A dropped frame is when the system doesn’t finish its work within the frame budget. The display tries to get the new frame from the frame buffer before the browser is done filling it in. In this case, the display shows the old version of the frame again.

A dropped frame is kind of like if you tore a page out of that flip book. It would make the animation seem to stutter or jump because you’re missing the transition between the previous page and the next.

Picture of a flipbook missing a page with a janky animation next to it

So we want to make sure that we get all of these pixels into the frame buffer before the display checks it again. Let’s look at how browsers have historically done this, and how that has changed over time. Then we can see how we can make this faster.

A brief history of painting and compositing

Note: Painting and compositing are where browser rendering engines differ most from each other. Single-platform browsers (Edge and Safari) work a bit differently than multi-platform browsers (Firefox and Chrome) do.

Even in the earliest browsers, there were some optimizations to make pages render faster. For example, if you were scrolling content, the browser would keep the part that was still visible and move it. Then it would paint new pixels in the blank spot.

This process of figuring out what has changed and then only updating the changed elements or pixels is called invalidation.

As time went on, browsers started applying more invalidation techniques, like rectangle invalidation. With rectangle invalidation, you figure out the smallest rectangle around each part of the screen that changed. Then, you only redraw what’s inside those rectangles.

This really reduces the amount of work that you need to do when there’s not much changing on the page… for example, when you have a single blinking cursor.

Blinking cursor with small repaint rectangle around it
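Here’s a rough sketch of the rectangle-invalidation idea in Rust. The types and names (Rect, InvalidationTracker) are made up for illustration; real browsers track dirty regions in more sophisticated ways.

```rust
// A sketch of rectangle invalidation: track the smallest rectangle covering
// everything that changed since the last frame, then repaint only that area.
#[derive(Clone, Copy)]
struct Rect { x: i32, y: i32, w: i32, h: i32 }

fn union(a: Rect, b: Rect) -> Rect {
    let x = a.x.min(b.x);
    let y = a.y.min(b.y);
    let right = (a.x + a.w).max(b.x + b.w);
    let bottom = (a.y + a.h).max(b.y + b.h);
    Rect { x, y, w: right - x, h: bottom - y }
}

struct InvalidationTracker { dirty: Option<Rect> }

impl InvalidationTracker {
    // Grow the dirty rectangle to cover a newly changed area.
    fn invalidate(&mut self, changed: Rect) {
        self.dirty = Some(match self.dirty {
            Some(existing) => union(existing, changed),
            None => changed,
        });
    }

    // The renderer repaints just this area, then the tracker resets.
    fn take_repaint_area(&mut self) -> Option<Rect> {
        self.dirty.take()
    }
}
```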

But that doesn’t help much when large parts of the page are changing. So the browsers came up with new techniques to handle those cases.

Introducing layers and compositing

Using layers can help a lot when large parts of the page are changing… at least, in certain cases.

The layers in browsers are a lot like layers in Photoshop, or the onion skin layers that were used in hand-drawn animation. Basically, you paint different elements of the page on different layers. Then you place those layers on top of each other.

They have been a part of the browser for a long time, but they weren’t always used to speed things up. At first, they were just used to make sure pages rendered correctly. They corresponded to something called stacking contexts.

For example, if you had a translucent element, it would be in its own stacking context. That meant it got its own layer so you could blend its color with the color below it. These layers were thrown out as soon as the frame was done. On the next frame, all the layers would be repainted again.

Layers for opacity generated, then frame rendered, then thrown out

But often the things on these layers didn’t change from frame to frame. For example, think of a traditional animation. The background doesn’t change, even if the characters in the foreground do. It’s a lot more efficient to keep that background layer around and just reuse it.

So that’s what browsers did. They retained the layers. Then the browser could just repaint layers that had changed. And in some cases, layers weren’t even changing. They just needed to be rearranged—for example, if an animation was moving across the screen, or something was being scrolled.

Two layers moving relative to each other as a scroll box is scrolled

This process of arranging layers together is called compositing. The compositor starts with:

  • source bitmaps: the background (including a blank box where the scrollable content should be) and the scrollable content itself
  • a destination bitmap, which is what gets displayed on the screen

First, the compositor would copy the background to the destination bitmap.

Then it would figure out what part of the scrollable content should be showing. It would copy that part over to the destination bitmap.

Source bitmaps on the left, destination bitmap on the right
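A simplified sketch of that compositing step, assuming both bitmaps use the same pixel format and the background is already window-sized; the types are illustrative, not browser code.

```rust
// Copy the background into the destination bitmap, then copy only the
// visible rows of the scrollable content on top of it.
#[derive(Clone, Copy)]
struct Pixel(u8, u8, u8, u8); // RGBA

struct Bitmap {
    width: usize,
    height: usize,
    pixels: Vec<Pixel>, // row-major
}

fn composite(
    background: &Bitmap,    // source bitmap 1: the page background
    content: &Bitmap,       // source bitmap 2: the scrollable content
    scroll_offset: usize,   // how far down the content is scrolled
    viewport_top: usize,    // where the scroll box sits in the window
    viewport_height: usize, // how tall the scroll box is
    dest: &mut Bitmap,      // destination bitmap shown on screen
) {
    // 1. Copy the background wholesale.
    dest.pixels.copy_from_slice(&background.pixels);

    // 2. Copy only the rows of content that should currently be visible.
    let width = content.width.min(dest.width);
    for row in 0..viewport_height {
        let src_row = scroll_offset + row;
        let dst_row = viewport_top + row;
        if src_row >= content.height || dst_row >= dest.height {
            break;
        }
        let src = src_row * content.width;
        let dst = dst_row * dest.width;
        dest.pixels[dst..dst + width]
            .copy_from_slice(&content.pixels[src..src + width]);
    }
}
```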

This reduced the amount of painting that the main thread had to do. But it still meant that the main thread was spending a lot of time on compositing. And there are lots of things competing for time on the main thread.

I’ve talked about this before, but the main thread is kind of like a full-stack developer. It’s in charge of the DOM, layout, and JavaScript. And it was also in charge of painting and compositing.

Main thread doing DOM, JS, and layout, plus paint and composite

Every millisecond the main thread spends doing paint and composite is time it can’t spend on JavaScript or layout.

CPU working on painting and thinking "I really should get to that JS soon"

But there was another part of the hardware that was lying around without much work to do. And this hardware was specifically built for graphics. That was the GPU, which games have been using since the late 90s to render frames quickly. And GPUs have been getting bigger and more powerful ever since then.

A drawing of a computer chip with 4 CPU cores and a GPU

GPU accelerated compositing

So browser developers started moving things over to the GPU.

There are two tasks that could potentially move over to the GPU:

  1. Painting the layers
  2. Compositing them together

It can be hard to move painting to the GPU. So for the most part, multi-platform browsers kept painting on the CPU.

But compositing was something that the GPU could do very quickly, and it was easy to move over to the GPU.

Main thread passing layers to GPU

Some browsers took this parallelism even further and added a compositor thread on the CPU. It became a manager for the compositing work that was happening on the GPU. This meant that if the main thread was doing something (like running JavaScript), the compositor thread could still handle things for the user, like scrolling content up when the user scrolled.

Compositor thread sitting between main thread and GPU, passing layers to GPU

So this moves all of the compositing work off of the main thread. It still leaves a lot of work on the main thread, though. Whenever we need to repaint a layer, the main thread needs to do it, and then transfer that layer over to the GPU.

Some browsers moved painting off to another thread (and we’re working on that in Firefox today). But it’s even faster to move this last little bit of work — painting — to the GPU.

GPU accelerated painting

So browsers started moving painting to the GPU, too.

Paint and composite handled by the GPU

Browsers are still in the process of making this shift. Some browsers paint on the GPU all of the time, while others only do it on certain platforms (like only on Windows, or only on mobile devices).

Painting on the GPU does a few things. It frees up the CPU to spend all of its time doing things like JavaScript and layout. Plus, GPUs are much faster at drawing pixels than CPUs are, so it speeds painting up. It also means less data needs to be copied from the CPU to the GPU.

But maintaining this division between paint and composite still has some costs, even when they are both on the GPU. This division also limits the kinds of optimizations that you can use to make the GPU do its work faster.

This is where WebRender comes in. It fundamentally changes the way we render, removing the distinction between paint and composite. This gives us a way to tailor the performance of our renderer to give you the best user experience on today’s web, and to best support the use cases that you will see on tomorrow’s web.

This means we don’t just want to make frames render faster… we want to make them render more consistently and without jank. And even when there are lots of pixels to draw, like on 4k displays or WebVR headsets, we still want the experience to be just as smooth.

When do current browsers get janky?

The optimizations above have helped pages render faster in certain cases. When not much is changing on a page—for example, when there’s just a single blinking cursor—the browser will do the least amount of work possible.

Blinking cursor with small repaint rectangle around it

Breaking up pages into layers has expanded the number of those best-case scenarios. If you can paint a few layers and then just move them around relative to each other, then the painting+compositing architecture works well.

Rotating clock hand as a layer on top of another layer

But there are also trade-offs to using layers. They take up a lot of memory and can actually make things slower. Browsers need to combine layers where it makes sense… but it’s hard to tell where it makes sense.

This means that if there are a lot of different things moving on the page, you can end up with too many layers. These layers fill up memory and take too long to transfer to the compositor.

Many layers on top of each other

Other times, you’ll end up with one layer when you should have multiple layers. That single layer will be continually repainted and transferred to the compositor, which then composites it without changing anything.

This means you’ve doubled the amount of drawing you have to do, touching each pixel twice without getting any benefit. It would have been faster to simply render the page directly, without the compositing step.

Paint and composite producing the same bitmap

And there are lots of cases where layers just don’t help much. For example, if you animate background color, the whole layer has to be repainted anyway. These layers only help with a small number of CSS properties.

Even if most of your frames are best-case scenarios—that is, they only take up a tiny bit of the frame budget—you can still get choppy motion. For perceptible jank, only a couple of frames need to fall into worst-case scenarios.

Frame timeline with a few frames that go over the frame budget, causing jank

These scenarios are called performance cliffs. Your app seems to be moving along fine until it hits one of these worst-case scenarios (like animating background color) and all of a sudden your app’s frame rate topples over the edge.

Person falling over the edge of a cliff labeled animating background color

But we can get rid of these performance cliffs.

How do we do this? We follow the lead of 3D game engines.

Using the GPU like a game engine

What if we stopped trying to guess what layers we need? What if we removed this boundary between painting and compositing and just went back to painting every pixel on every frame?

This may sound like a ridiculous idea, but it actually has some precedent. Modern day video games repaint every pixel, and they maintain 60 frames per second more reliably than browsers do. And they do it in an unexpected way… instead of creating these invalidation rectangles and layers to minimize what they need to paint, they just repaint the whole screen.

Wouldn’t rendering a web page like that be way slower?

If we paint on the CPU, it would be. But GPUs are designed to make this work.

GPUs are built for extreme parallelism. I talked about parallelism in my last article about Stylo. With parallelism, the machine can do multiple things at the same time. The number of things it can do at once is limited by the number of cores that it has.

CPUs usually have between 2 and 8 cores. GPUs usually have at least a few hundred cores, and often more than 1,000 cores.

These cores work a little differently, though. They can’t act completely independently like CPU cores can. Instead, they usually work on something together, running the same instruction on different pieces of the data.

CPU cores working independently, GPU cores working together

This is exactly what you need when you’re filling in pixels. Each pixel can be filled in by a different core. Because it can work on hundreds of pixels at a time, the GPU is a lot faster at filling in pixels than the CPU… but only if you make sure all of those cores have work to do.

Because cores need to work on the same thing at the same time, GPUs have a pretty rigid set of steps that they go through, and their APIs are pretty constrained. Let’s take a look at how this works.

First, you need to tell the GPU what to draw. This means giving it shapes and telling it how to fill them in.

To do this, you break up your drawing into simple shapes (usually triangles). These shapes are in 3D space, so some shapes can be behind others. Then you take all of the corners of those triangles and put their x, y, and z coordinates into an array.

Then you issue a draw call—you tell the GPU to draw those shapes.

CPU passing triangle coordinates to GPU
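To picture what “putting the corners into an array and issuing a draw call” means, here’s a small Rust sketch that turns a rectangle into two triangles’ worth of vertices. It uses no real graphics API; the vertex array is simply what a draw call would hand over.

```rust
// A sketch of preparing vertex data for a draw call. Plain Rust, no GPU API;
// the names are illustrative.
#[derive(Clone, Copy)]
struct Vertex { x: f32, y: f32, z: f32 }

// Two triangles that together cover an axis-aligned rectangle.
fn rectangle_to_triangles(x: f32, y: f32, w: f32, h: f32, z: f32) -> [Vertex; 6] {
    let (x2, y2) = (x + w, y + h);
    [
        Vertex { x, y, z },     Vertex { x: x2, y, z },     Vertex { x, y: y2, z },
        Vertex { x: x2, y, z }, Vertex { x: x2, y: y2, z }, Vertex { x, y: y2, z },
    ]
}

fn main() {
    // The flat array of corners that a draw call would hand to the GPU.
    let vertices = rectangle_to_triangles(0.0, 0.0, 100.0, 50.0, 0.0);
    println!("{} vertices in this draw call", vertices.len());
}
```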

From there, the GPU takes over. All of the cores will work on the same thing at the same time. They will:

  1. Figure out where all of the corners of the shapes are. This is called vertex shading.

GPU cores drawing vertexes on a graph

  2. Figure out the lines that connect those corners. From this, you can figure out which pixels are covered by the shape. That’s called rasterization.

GPU cores drawing lines between vertexes

  3. Now that we know what pixels are covered by a shape, go through each pixel in the shape and figure out what color it should be. This is called pixel shading.

GPU cores filling in pixels

This last step can be done in different ways. To tell the GPU how to do it, you give the GPU a program called a pixel shader. Pixel shading is one of the few parts of the GPU that you can program.

Some pixel shaders are simple. For example, if your shape is a single color, then your shader program just needs to return that color for each pixel in the shape.

Other times, it’s more complex, like when you have a background image. You need to figure out which part of the image corresponds to each pixel. You can do this in the same way an artist scales an image up or down… put a grid on top of the image that corresponds to each pixel. Then, once you know which box corresponds to the pixel, take samples of the colors inside that box and figure out what the color should be. This is called texture mapping because it maps the image (called a texture) to the pixels.

Hi-res image being mapped to a much lower resolution space
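Here’s a toy version of the two shaders described above, written as plain Rust functions rather than real GPU shader code, just to show the shape of the logic. Texture and the nearest-texel sampling are simplifications.

```rust
// A toy "pixel shader" as a plain Rust function: the GPU runs a small program
// like this once per covered pixel. Real shaders run on the GPU in a shading
// language; this is only a sketch.
#[derive(Clone, Copy)]
struct Color { r: f32, g: f32, b: f32, a: f32 }

struct Texture { width: usize, height: usize, pixels: Vec<Color> }

// The simplest shader: every pixel of the shape gets the same color.
fn solid_color_shader(color: Color) -> Color {
    color
}

// A texture-mapping shader: (u, v) in [0, 1] says where this pixel falls on
// the image; sample the nearest texel. Real GPUs filter more carefully.
fn texture_shader(texture: &Texture, u: f32, v: f32) -> Color {
    let x = ((u * texture.width as f32) as usize).min(texture.width - 1);
    let y = ((v * texture.height as f32) as usize).min(texture.height - 1);
    texture.pixels[y * texture.width + x]
}
```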

The GPU will call your pixel shader program on each pixel. Different cores will work on different pixels at the same time, in parallel, but they all need to be using the same pixel shader program. When you tell the GPU to draw your shapes, you tell it which pixel shader to use.

For almost any web page, different parts of the page will need to use different pixel shaders.

Because the shader applies to all of the shapes in the draw call, you usually have to break up your draw calls into multiple groups. These are called batches. To keep all of the cores as busy as possible, you want to create a small number of batches which have lots of shapes in them.

CPU passing a box containing lots of coordinates and a pixel shader to the GPU

So that’s how the GPU splits up work across hundreds or thousands of cores. It’s only because of this extreme parallelism that we can think of rendering everything on each frame. Even with the extreme parallelism, though, it’s still a lot of work. You still need to be smart about how you do this. Here’s where WebRender comes in…

How WebRender works with the GPU

Let’s go back to look at the steps the browser goes through to render the page. Two things will change here.

Diagram showing the stages of the rendering pipeline with two changes. The frame tree is now a display list, and paint and composite have been combined into Render.

  1. There’s no longer a distinction between paint and composite… they are both part of the same step. The GPU does them at the same time based on the graphics API commands that were passed to it.
  2. Layout now gives us a different data structure to render. Before, it was something called a frame tree (or render tree in Chrome). Now, it passes off a display list.

The display list is a set of high-level drawing instructions. It tells us what we need to draw without being specific to any graphics API.
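One way to picture a display list is as a flat list of high-level drawing commands. The enum below is an illustration of that idea, not WebRender’s actual display-item format.

```rust
// High-level drawing commands that say *what* to draw without referencing
// any graphics API. Illustrative types only.
struct Rect { x: f32, y: f32, w: f32, h: f32 }
struct Color { r: f32, g: f32, b: f32, a: f32 }

enum DisplayItem {
    Rectangle { bounds: Rect, color: Color },
    Text { bounds: Rect, glyph_ids: Vec<u32>, color: Color },
    Image { bounds: Rect, image_key: u64 },
    Border { bounds: Rect, widths: [f32; 4], color: Color },
}

type DisplayList = Vec<DisplayItem>;
```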

Whenever there’s something new to draw, the main thread gives that display list to the RenderBackend, which is WebRender code that runs on the CPU.

The RenderBackend’s job is to take this list of high-level drawing instructions and convert it to the draw calls that the GPU needs, which are batched together to make them run faster.

Diagram of the 4 different threads, with a RenderBackend thread between the main thread and compositor thread. The RenderBackend thread translates the display list into batched draw calls

Then the RenderBackend will pass those batches off to the compositor thread, which passes them to the GPU.

The RenderBackend wants to make the draw calls it’s giving to the GPU as fast to run as possible. It uses a few different techniques for this.

Removing any unnecessary shapes from the list (Early culling)

The best way to save time is to not do the work at all.

First, the RenderBackend cuts down the list of display items. It figures out which display items will actually be on the screen. To do this, it looks at things like how far down the scroll is for each scroll box.

If any part of a shape is inside the box, then it is included. If none of the shape would have shown up on the page, though, it’s removed. This process is called early culling.

A browser window with some parts off screen. Next to that is a display list with the offscreen elements removed
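A minimal sketch of early culling, assuming axis-aligned bounds: keep an item only if its bounds intersect the visible region. DisplayItem and Rect here are illustrative stand-ins, not WebRender’s real types.

```rust
// Keep only display items whose bounds intersect the visible region of
// their scroll box; drop the rest before building any draw calls.
#[derive(Clone, Copy)]
struct Rect { x: f32, y: f32, w: f32, h: f32 }

impl Rect {
    fn intersects(&self, other: &Rect) -> bool {
        self.x < other.x + other.w && other.x < self.x + self.w &&
        self.y < other.y + other.h && other.y < self.y + self.h
    }
}

struct DisplayItem { bounds: Rect /* plus color, image, text run, ... */ }

fn cull(items: Vec<DisplayItem>, visible: Rect) -> Vec<DisplayItem> {
    // If any part of the item's bounds falls inside the visible rect, keep it.
    items.into_iter().filter(|item| item.bounds.intersects(&visible)).collect()
}
```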

Minimizing the number of intermediate textures (The render task tree)

Now we have a tree that only contains the shapes we’ll use. This tree is organized into those stacking contexts we talked about before.

Effects like CSS filters and stacking contexts make things a little complicated. For example, let’s say you have an element that has an opacity of 0.5 and it has children. You might think that each child is transparent… but it’s actually the whole group that’s transparent.

Three overlapping boxes that are translucent, so they show through each other, next to a translucent shape formed by the three boxes where the boxes don't show through each other

Because of this, you need to render the group out to a texture first, with each box at full opacity. Then, when you’re placing it in the parent, you can change the opacity of the whole texture.

These stacking contexts can be nested… that parent might be part of another stacking context. Which means it has to be rendered out to another intermediate texture, and so on.

Creating the space for these textures is expensive. As much as possible, we want to group things into the same intermediate texture.

To help the GPU do this, we create a render task tree. With it, we know which textures need to be created before other textures. Any textures that don’t depend on others can be created in the first pass, which means they can be grouped together in the same intermediate texture.

So in the example above, we’d first do a pass to output one corner of a box shadow. (It’s slightly more complicated than this, but this is the gist.)

A 3-level tree with a root, then an opacity child, which has three box shadow children. Next to that is a render target with a box shadow corner

In the second pass, we can mirror this corner all around the box to place the box shadow on the boxes. Then we can render out the group at full opacity.

Same 3-level tree with a render target with the 3 box shape at full opacity

Next, all we need to do is change the opacity of this texture and place it where it needs to go in the final texture that will be output to the screen.

Same tree with the destination target showing the 3 box shape at decreased opacity

By building up this render task tree, we figure out the minimum number of offscreen render targets we can use. That’s good, because as I mentioned, creating the space for these render target textures is expensive.

It also helps us batch things together.
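Here’s a small sketch of how a render task tree can be turned into passes: tasks with no dependencies go in the first pass, and parents render only after the textures they sample from exist. The types are illustrative, not WebRender’s real task graph.

```rust
// Walk the tree by dependency depth: pass 0 holds the deepest leaves, the
// final pass holds the root that draws into the screen's target.
struct RenderTask {
    name: &'static str,
    children: Vec<RenderTask>, // tasks whose output this task samples from
}

fn assign_passes(task: &RenderTask, passes: &mut Vec<Vec<&'static str>>) -> usize {
    let depth = task
        .children
        .iter()
        .map(|child| assign_passes(child, passes) + 1)
        .max()
        .unwrap_or(0);
    if passes.len() <= depth {
        passes.resize(depth + 1, Vec::new());
    }
    passes[depth].push(task.name);
    depth
}

fn main() {
    let tree = RenderTask {
        name: "screen",
        children: vec![RenderTask {
            name: "opacity group",
            children: vec![RenderTask { name: "box shadow corner", children: vec![] }],
        }],
    };
    let mut passes: Vec<Vec<&'static str>> = Vec::new();
    assign_passes(&tree, &mut passes);
    // passes[0] = ["box shadow corner"], passes[1] = ["opacity group"], passes[2] = ["screen"]
    println!("{:?}", passes);
}
```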

Grouping draw calls together (Batching)

As we talked about before, we need to create a small number of batches which have lots of shapes in them.

Paying attention to how you create batches can really speed things up. You want to have as many shapes in the same batch as you can. This is for a couple of reasons.

First, whenever the CPU tells the GPU to do a draw call, the CPU has to do a lot of work. It has to do things like set up the GPU, upload the shader program, and test for different hardware bugs. This work adds up, and while the CPU is doing this work, the GPU might be idle.

Second, there’s a cost to changing state. Let’s say that you need to change the shader program between batches. On a typical GPU, you need to wait until all of the cores are done with the current shader. This is called draining the pipeline. Until the pipeline is drained, other cores will be sitting idle.

Multiple GPU cores standing around while one finishes with the previous pixel shader

Because of this, you want to batch as much as possible. For a typical desktop PC, you want to have 100 draw calls or fewer per frame, and you want each call to have thousands of vertices. That way, you’re making the best use of the parallelism.

We look at each pass from the render task tree and figure out what we can batch together.

At the moment, each of the different kinds of primitives requires a different shader. For example, there’s a border shader, and a text shader, and an image shader.

 

Boxes labeled with the type of batch they contain (e.g. Borders, Images, Rectangles)

We believe we can combine a lot of these shaders, which will allow us to have even bigger batches, but this is already pretty well batched.
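A rough sketch of the grouping idea: bucket primitives by the shader they need so each bucket can become one draw call. ShaderKind and Primitive are made-up names for illustration, not WebRender’s real batching code.

```rust
use std::collections::HashMap;

// Group primitives that share a shader so they can be submitted together,
// avoiding pipeline drains between draw calls.
#[derive(PartialEq, Eq, Hash, Clone, Copy)]
enum ShaderKind { Rectangle, Border, Text, Image }

struct Primitive { shader: ShaderKind, vertices: Vec<[f32; 3]> }

struct Batch { shader: ShaderKind, vertices: Vec<[f32; 3]> }

fn batch(primitives: Vec<Primitive>) -> Vec<Batch> {
    let mut by_shader: HashMap<ShaderKind, Vec<[f32; 3]>> = HashMap::new();
    for prim in primitives {
        // Every primitive that uses the same shader goes into the same bucket.
        by_shader.entry(prim.shader).or_default().extend(prim.vertices);
    }
    by_shader
        .into_iter()
        .map(|(shader, vertices)| Batch { shader, vertices })
        .collect()
}
```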

We’re almost ready to send it off to the GPU. But there’s a little bit more work we can eliminate.

Reducing pixel shading with opaque and alpha passes (Z-culling)

Most web pages have lots of shapes overlapping each other. For example, a text field sits on top of a div (with a background) which sits on top of the body (with another background).

When it’s figuring out the color for a pixel, the GPU could figure out the color of the pixel in each shape. But only the topmost shape is going to show. This is called overdraw and it wastes GPU time.

3 layers on top of each other with a single overlapping pixel called out across all three layers

So one thing you could do is render the top shape first. For the next shape, when you get to that same pixel, check whether or not there’s already a value for it. If there is, then don’t do the work.

3 layers where the overlapping pixel isn't filled in on the 2 bottom layers

There’s a little bit of a problem with this, though. Whenever a shape is translucent, you need to blend the colors of the two shapes. And in order for it to look right, that needs to happen back to front.

So what we do is split the work into two passes. First, we do the opaque pass. We go front to back and render all of the opaque shapes. We skip any pixels that are behind others.

Then, we do the translucent shapes. These are rendered back to front. If a translucent pixel falls on top of an opaque one, it gets blended into the opaque one. If it would fall behind an opaque shape, it doesn’t get calculated.

This process of splitting the work into opaque and alpha passes and then skipping pixel calculations that you don’t need is called Z-culling.

While it may seem like a simple optimization, this has produced very big wins for us. On a typical web page, it vastly reduces the number of pixels that we need to touch, and we’re currently looking at ways to move more work to the opaque pass.
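Here’s a bare-bones sketch of splitting items into the two passes, assuming each item has a depth value and an opacity flag. The real pipeline relies on the GPU’s depth test to skip hidden pixels, but the ordering logic looks roughly like this.

```rust
// Split work into an opaque pass (front to back) and an alpha pass
// (back to front). `z` is distance from the viewer; illustrative types only.
struct Item { z: f32, opaque: bool }

fn split_passes(items: Vec<Item>) -> (Vec<Item>, Vec<Item>) {
    let (mut opaque, mut alpha): (Vec<Item>, Vec<Item>) =
        items.into_iter().partition(|item| item.opaque);

    // Opaque pass: front to back, so the depth test can skip hidden pixels.
    opaque.sort_by(|a, b| a.z.partial_cmp(&b.z).unwrap());

    // Alpha pass: back to front, so translucent colors blend correctly.
    alpha.sort_by(|a, b| b.z.partial_cmp(&a.z).unwrap());

    (opaque, alpha)
}
```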

At this point, we’ve prepared the frame. We’ve done as much as we can to eliminate work.

… And we’re ready to draw!

We’re ready to set up the GPU and render our batches.

Diagram of the 4 threads with compositor thread passing off opaque pass and alpha pass to GPU

A caveat: not everything is on the GPU yet

The CPU still has to do some painting work. For example, we still render the characters (called glyphs) that are used in blocks of text on the CPU. It’s possible to do this on the GPU, but it’s hard to get a pixel-for-pixel match with the glyphs that the computer renders in other applications. So people can find it disorienting to see GPU-rendered fonts. We are experimenting with moving things like glyphs to the GPU with the Pathfinder project.

For now, these things get painted into bitmaps on the CPU. Then they are uploaded to something called the texture cache on the GPU. This cache is kept around from frame to frame because these bitmaps usually don’t change.

Even though this painting work is staying on the CPU, we can still make it faster than it is now. For example, when we’re painting the characters in a font, we split up the different characters across all of the cores. We do this using the same technique that Stylo uses to parallelize style computation… work stealing.
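As a sketch of what “splitting characters across cores with work stealing” can look like, here’s a small example using the rayon crate’s work-stealing thread pool. Rayon, rasterize_glyph, and GlyphBitmap are stand-ins chosen for illustration, not necessarily what Gecko actually uses internally.

```rust
// Assumes the rayon crate is available as a dependency.
use rayon::prelude::*;

struct Glyph { id: u32 }
struct GlyphBitmap { id: u32, pixels: Vec<u8> }

fn rasterize_glyph(glyph: &Glyph) -> GlyphBitmap {
    // Placeholder: real code would run the font rasterizer here.
    GlyphBitmap { id: glyph.id, pixels: vec![0; 16 * 16] }
}

fn rasterize_all(glyphs: &[Glyph]) -> Vec<GlyphBitmap> {
    // par_iter() hands chunks of the glyph list to worker threads; idle
    // workers "steal" work from busy ones, keeping all cores occupied.
    glyphs.par_iter().map(rasterize_glyph).collect()
}
```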

What’s next for WebRender?

We look forward to landing WebRender in Firefox as part of Quantum Render in 2018, a few releases after the initial Firefox Quantum release. This will make today’s pages run more smoothly. It also gets Firefox ready for the new wave of high-resolution 4K displays, because rendering performance becomes more critical as you increase the number of pixels on the screen.

But WebRender isn’t just useful for Firefox. It’s also critical to the work we’re doing with WebVR, where you need to render a different frame for each eye at 90 FPS at 4K resolution.

An early version of WebRender is currently available behind a flag in Firefox. Integration work is still in progress, so the performance is currently not as good as it will be when that is complete. If you want to keep up with WebRender development, you can follow the GitHub repo, or follow Firefox Nightly on Twitter for weekly updates on the whole Quantum Render project.

A Candle Loses Nothing by Lighting Another Candle

"Do you know what the trace of a matrix is?" he asked in conversation about math we enjoyed. The context was networking-- two professionals in the industry getting to know each other better at an event.
"No," I replied.
"I thought you were at a higher math level," he replied. He sighed and started explaining basic linear algebra concepts.

He then asked me what exactly my company does. I politely told him I build a texture compressor. He told me that that was pretty easy, boring work, but I guess good enough to pay the bills.

I'm noticing a high spike in this attitude lately, as my company's gotten more visibly successful.

And I think it's because when we believe someone is successful, we can choose our reaction: resentment, or joy.

I'm established now. I own a great company. I love my work and have happy customers and supportive people in my life. So I can see straight through the resentment for what it is: insecurity and disappointment in their own image of themselves. Fear that they'll be found out if they don't act smart. Putting people down so they aren't a threat.

This resentment manifests in a few major ways:

  • People asking me software/math trivia or throwing around obscure terms and acting surprised when I don't know them. People grasping at every time I say "I don't know" to prove my incompetence.
  • People simply telling me I'm not actually that successful, or my work isn't actually that valuable or enjoyable, or I'm not actually this happy.
  • And they often have networks of other people who have this attitude.

But you see, the people who react with joy:

  • Ask me how they can help me. Give without expecting to receive if they're in a position to do so.
  • Emanate a joy that's contagious and brightens my whole day, making me want to live life more fully.
  • Ask questions that come from a place of curiosity and enjoyment of life, not trying to prove who's smarter or more successful.
  • Introduce me to more kind-hearted people.

What is that saying? "A candle loses nothing by lighting another candle." These people live that quote.

It's important for me to acknowledge this because it bleeds into business. I've learned very quickly to avoid doing business with those who react to my success with resentment. Even if they aren't doing it on purpose, even if they aren't malicious, I still keep my distance.

It is a decision that cost me some short term profits early on.

But it is a decision that has more than paid off, over and over and over.

I kindly thank them for their willingness to talk to me, and move back to those who treat others with joy and happiness.

Gout.

A lovely epigram by Thomas Erskine:

The French have taste in all they do,
Which we are quite without;
For Nature, which to them gave goût,
To us gave only gout.

Robert E McGinnis and the Secret of The New Cover

I've loved Robert McGinnis's covers for a very long time. I remember the first one I was aware of (it was the cover of Ian Fleming's  James Bond book DIAMONDS ARE FOREVER, when I was about 9. They put the film poster on the book cover, which puzzled me a bit because the plot of the book isn't the plot of the film.) And I assumed that he had retired a long, long time ago.

About a year ago, Jennifer Brehl and I were talking. Jennifer is my editor at William Morrow, and is one of the best, most sensible and wisest people in my life. I am lucky to have her. We were talking about paperbacks, and how publishers put less effort into them these days. I went off about how paperback covers used to be beautiful, and were painted, and told you so much. And how much I missed the covers of the '50s and '60s and '70s, the  ones I'd collected and bought back in the dawn of time.

And somehow the conversation wound up with me asking if Harper Collins would publish a set of mass market paperbacks of my books with gloriously retro covers and Jennifer saying that yes, they would.

A few days later I was in DreamHaven Books in Minneapolis. I noticed a particularly gorgeous cover on an old book on a shelf. "Who did that?" I asked Greg Ketter.

"Robert McGinnis," said Greg. "Actually we have a whole book of McGinnis artwork." He showed it to me. The Art of Robert E. McGinnis. It's gorgeous. Here's the cover:

http://amzn.to/2aLcYg2

I was surprised at how recent the book was. It had been published a few months earlier. "Oh yes," said Greg. "Bob's still painting. Must be almost 90."

(He was 90 in February 2016.)

I sent a note to Jennifer asking if there was even the slightest possibility that Mr McGinnis would be interested in painting the covers for the paperback set we wanted to do. He said yes. Todd Klein, the finest letterer in comics, came in to create each book's logo and to help design it and pick the fonts, to make each book feel like it came from a certain age.

Each painting from McGinnis was better than the one before. Each logo and layout from Todd Klein was more assured and more accurate. These things are glorious.

Now... we were planning to announce these in a much more planned and orderly way. I'm not going to tell you what books we're doing, or show you any covers but the one.

And that's because the upcoming 2017 Starz American Gods TV series has created a huge demand for copies of American Gods. People who have never read it have started buying it to find out what the fuss is about. People who read it long ago and gave away their copy bought new ones to reread it.

The publishers ran out of books to sell.

So they've rushed back to press with the new paperback edition, which wasn't meant to be coming out for some months (and the text is the text of the Author's Preferred edition in case you were wondering).

And that means the version of the paperback with the new cover is going to be coming out a lot sooner than we thought. And tomorrow it will probably be up on Amazon.

And I wanted you to hear it from me first.  You aren't going to see the rest of the Robert E McGinnis covers for a little while (and each of them looks like a different kind of book from a different era). But this is the first of them.

In my head, and Todd's, it's probably from about 1971...

Are you ready?

Okay....

Here goes...

[the new American Gods paperback cover, painted by Robert E McGinnis]

...and wait until you see the rest of them.


1 public comment
brennen (Boulder, CO): That style sure does ring a bell.

When does heart disease begin (and what this tells us about prevention)?

I know, I know, I said I was going to limit things to one post per year, but the last one doesn’t really count and while the year isn’t even half over I’m willing to predict nothing will inspire me more to write a post in the next 6 months than Allan Sniderman’s recent editorial piece in JAMA Cardiology. More on that momentarily.

Before I get into this post I want to lay a few things out.

  1. This post is written mostly for doctors, but also for patients who really want to understand this topic, if for no other reason than to help them choose the right doctors. I don’t go out of my way to simplify the terminology and I assume the reader is familiar with the topics covered in the cholesterol series I wrote three or four years ago. If you encounter a term you don’t understand, Google is a pretty good place to find the definition.
  2. I will not use this post to in any way get into prescriptive strategies, which involve modifications of nutrition, hormones, and yes, lots of drugs across four or five classes (i.e., much more than just statins) depending on the specific situation at hand and the risk appetite of the patient and physician, as well as the other comorbidities that must be co-managed. Even if I wanted to write out all of my prescriptive leanings I could not do it briefly.
  3. Please do not email your lab results or ask me to weigh in on your case. You know the disclaimer: I can’t practice medicine on a blog or over email.

There was a day when the only thing I argued about was who the greatest boxer of all time was. (I’m fighting all urges to turn this post into a manifesto of 1965-67 Muhammad Ali vs. 1938-41 Joe Louis vs. 1940’s Sugar Ray Robinson vs. 1937-42 Henry Armstrong.)

Today, however, I find myself arguing about so many things—some of them actually important—from why symptomatic women should receive hormone replacement therapy after menopause (and, by extension, why the Women’s Health Initiative tells us so, if you know how to read it) to why monotherapy with T4 for hypothyroidism is a recipe for disaster for most patients. But there is nothing I find myself debating more than the misconceptions most doctors have about heart disease. This is especially troubling since heart disease kills more Americans than any other disease. To put this in perspective, a woman in the United States is 7 to 8 times more likely to die from heart disease than she is from breast cancer.

Here are the typical arguments put forth, almost always by doctors, which invariably result in my need/desire to counter:

  1. Heart disease is caused by too much “bad” cholesterol (LDL-C).
  2. LDL-C is the only target of therapy you need to worry about.
  3. Calcium scores and CT angiograms (CTA) are great ways to further risk-stratify (the corollary: when these tests are negative, there is no need to treat the patient).
  4. Atherosclerosis is a “pipe narrowing” disease (ok, nobody uses these words, but they imply this by saying it’s a luminal narrowing disease).
  5. There is no role for preventatively treating young people, except in very rare cases like familial hypercholesterolemia.

Briefly, here are my counters:

  1. Atherosclerosis is caused by an inflammatory response to sterols in artery walls. Sterol delivery is lipoprotein-mediated, and therefore much better predicted by the number of lipoprotein particles (LDL-P) than by the cholesterol they carry (LDL-C) [Bonus point: always measure Lp(a)-P in your patients—but we’ll have to save Lp(a) for another day; it certainly owns its own blog post]
  2. Ditto point #1. And don’t ever bring up LDL-C again.
  3. Calcium scores and CT angiograms of exceptional quality (the operative word being exceptional—most are not) are helpful in a few settings, but this assertion is patently false, and I will leave this discussion for another blog post as the topic is too rich in nuance for a few lines.
  4. We’ll discuss this today.
  5. By necessity, we’ll be forced to confront this today, also.

Before diving into this topic it’s really important for me to acknowledge the person who has taught me almost everything I know about this disease, beginning back in 2011 when I first became aware that I basically had no idea what atherosclerosis was. For the past 5 years Dr. Tom Dayspring’s generosity has been remarkable and I’m humbled to be his most sponge-like student. Tom has not only given me an on-the-side lipidology fellowship, but he has also introduced me to the finest lipidologists and cardiologists in the country who have, in turn, been incredibly generous with their time and knowledge. I’m not the only one to benefit from Tom’s wisdom and generosity. I had dinner with Tom’s son and his wife once and I described Tom to them as a national treasure. That’s really how I feel about him. He is a nationally-recognized educator and his writing and presentations are devoured by fanatics like me across the globe. With Tom’s permission, I’ve deconstructed a video he put together into a series of figures which I’ll use to begin this discussion of how atherosclerosis actually takes place.

The physics of luminal narrowing

Traditionally, the atherosclerotic process was believed to involve plaque accumulation that prompted the gradual narrowing of the lumen, with the eventual development of stenosis. Stenosis then caused impaired control of flow (stable angina) and plaque rupture and thrombosis (unstable angina and MI). Consequently, prevailing opinion held that coronary angiography would be able to gauge the atherosclerotic process at all stages of disease.

However, in 1987, Glagov and colleagues proposed an alternative model of atherosclerosis development. After performing histological analyses of coronary artery sections, Glagov et al. reported that early atherosclerosis was characterized by plaque accumulation in the vessel wall and enlargement of the external elastic membrane (EEM) without a change in lumen size.

As atherosclerosis progressed, they found that plaque continued to accumulate in the vessel wall until the lesion occupied approximately 40% of the area within the EEM. At this point, the lumen area began to narrow. These findings have since been confirmed by intravascular ultrasound (IVUS). Due to the complex remodeling that occurs in the earlier stages of atherosclerosis, coronary angiography, which only visualizes the lumen, tends to underestimate the degree of atherosclerosis. In other words, atherosclerosis is well under way long before angiography is able to identify it.

I was reminded of the words of my Pathology professor back in the first year of medical school, “The only doctors who actually understand atherosclerosis are pathologists.” I would add lipidologists to that list, but I saw his point.

Most people, doctors included, think atherosclerosis is a luminal-narrowing condition—a so-called “pipe narrowing” condition. It’s true that eventually the lumen of a diseased vessel does narrow, but this is sort of like saying the defining feature of a subprime collateralized debt obligation (CDO) is the inevitable default on its underlying assets. By the time that happens, eleven other pathologic things have already happened and you’ve missed the opportunity for the most impactful intervention to prevent the cascade of events from occurring at all.

To reiterate: atherosclerosis development begins with plaque accumulation in the vessel wall, which is accompanied by expansion of the outer vessel wall without a change in the size of the lumen. Only in advanced disease, and after significant plaque accumulation, does the lumen narrow.

Michael Rothberg wrote a fantastic article on the misconception of the “clogged pipe” model of atherosclerosis. He opens with the following story:

A recent advertisement on the back cover of a special health issue of the New York Times Magazine section read “Ironic that a plumber came to us to help him remove a clog.” The ad referred to doctors in the cardiac catheterization laboratory as “one kind of pipe specialist,” and noted that the patient in the ad returned to work “just 2 days after having his own pipes cleaned out.” Although the image of coronary arteries as kitchen pipes clogged with fat is simple, familiar, and evocative, it is also wrong [emphasis mine].

Dr. Rothberg goes on to explain that for patients with stable disease, local interventions can only relieve symptoms; they do not prevent future myocardial infarctions. To be clear, at least 12 randomized trials conducted between 1987 and 2007, involving more than 5,000 patients, have found no reduction in myocardial infarction attributable to angioplasty in any of its forms. And yet, despite this overwhelming evidence, the plumbing model, complete with blockages that can be fixed, continues to be used to explain stable coronary disease to patients, who understandably assume that angioplasty or stents will prevent heart attacks—which they patently do not.

The root of the problem, in my view at least, is that we as doctors—and by extension, our patients and media—spend too much time looking at images like these (angiograms of coronary arteries complete with “clogged pipes”):

[angiogram images of coronary arteries]

And not enough time looking at images like these (the histological, i.e., pathology, sections of coronary arteries):

histology of progression

But who can blame us, I mean, angiograms are cool! But, alas, it’s time to get serious about understanding this disease if we want to prevent/delay it.

Atherosclerosis, for the cognoscenti

Ok, so now let’s get rigorous about the disease that kills more Americans than any other disease. To understand this, as Frederic Bastiat wrote long ago, we must resort to “long and arid dissertations.” Buckle up.

The following figures were constructed from a video Tom Dayspring produced in one of his stellar lectures on the development of atherosclerosis. I’ve broken the video down into 20 or so steps which show the transition from a completely normal endothelium (i.e., at birth) through myocardial infarction. Each figure is preceded by a brief explanation of its content.

 

The endothelium is a protective one cell layer lining the surface of the artery lumen. Endothelial cells perform many complex functions and are capable of modulating vascular tone, as well as inflammatory and thrombotic processes. Their function depends on many circulating and local factors.

1

Low density lipoprotein (“LDL”) is a lipid (the bulk of which is cholesterol) transport particle. Please re-read this sentence. It is not “bad cholesterol,” a term that has no meaning. LDL—the particle—allows lipids (cholesterol, but also triglyceride, phospholipid) to be delivered through the aqueous medium of the blood, since lipids are hydrophobic (i.e., repel water) and a “carrier” is needed to transport them in blood (which is mostly water).

If LDL particles are present in physiologic (i.e., normal) concentrations, they effectively deliver cholesterol to those tissues that require it (recall: all tissues make cholesterol but some don’t make enough for their own needs and therefore cholesterol needs to be trafficked around the body).

The terms LDL particle, LDL-P, and apoB are used interchangeably (the latter because LDL particles are defined by the wrapping of an apolipoprotein called apolipoprotein B-100).

2

When LDL particle concentration is elevated, the lipoproteins penetrate into the subendothelial space. Once in the intimal layer, they are securely attached to intimal proteoglycan molecules. The first step in atherogenesis is surface phospholipid (PL) exposure to reactive oxygen species and oxidation of the PL. LDL particles that are not oxidized are not atherogenic. To be clear, it’s not the “getting in there” part that is the problem (HDL particles do this all the time and so do LDL particles, for that matter), it’s the “getting stuck and oxidized in there” part, formally known as retention and oxidation.

3

Once retained in the subendothelial space, the LDL particle may be modified, or oxidized (the clusters of yellow circles in the subendothelial space, below).

4

Oxidized LDL particles are toxic to the endothelium. Now-dysfunctional endothelial cells express selectins and vascular cell adhesion molecules (VCAMS) which mark the injured areas of the vascular wall. It is worth pausing here for a moment. This step is kind of the turning point in the story. It’s also a perfectly “normal” thing for the endothelium to do. They sense a problem and like any law-abiding tissue, they ask for help from law enforcement. Think of the selectins and VCAMS as 911-calls. The police, who show up shortly, are the monocytes.

5

Selectins and VCAMS increase monocyte adherence to the endothelium.

6

The endothelial cells also express messenger cytokines such as interleukin-6 (IL-6) and tumor necrosis factor (TNF) which circulate to the liver and induce the production of C-reactive protein (CRP).

7

Monocytes penetrate the subendothelial space…

8

…and when they do, the monocytes differentiate into (i.e., “become”) macrophages (a more specific type of immune cell). Macrophages phagocytize (basically “ingest”) the modified or oxidized LDL particles.

10

The phagocytosis of oxidized LDL particles (oxLDL) and accumulation of lipid in the macrophage creates something called a foam cell.

12

Multiple foam cells coalesce to form the characteristic fatty streak, the hallmark of an early atherosclerotic plaque. Keep this in mind. Later in this post we’ll come back to “fatty streaks” and I want you to remember how much has taken place to get us to this stage. I will posit that no one reading this post does not have fatty streaks unless there are some prodigious 5-year-olds reading this.

13

Nascent Apo A-I containing particles (also known as prebeta-HDL particles) accumulate free or unesterified cholesterol from the macrophages using ATP Binding Cassette Transporters A1.

14

As the HDL particle lipidates itself by delipidating the macrophage (i.e., takes lipid out of the foam cell), the HDL particle, utilizing an enzyme called LCAT, esterifies the free cholesterol forming cholesteryl ester (CE). As a result, the HDL particle enlarges and is free to be delipidated at a variety of tissues. HDL delipidation occurs through multiple mechanisms using Cholesteryl Ester Transfer Protein (CETP), aqueous free diffusion, SRB1 receptors in endocrine glands or gonads (e.g., to make hormones), adipocytes (a major cholesterol storage organ) or the liver (e.g., to package for biliary delivery). HDLs can also be internalized as entire particles by surface liver receptors.

A brief, but important, digression: the complexity, above, is probably the reason why every trial that has tried to increase the concentration of cholesterol in HDL particles (i.e., raise HDL-C) has failed, and failed epically, to reduce events. The value in HDL particles (the so-called “good cholesterol”—my god I hate that term as much as I hate the term “bad cholesterol” when referring to LDL) is almost assuredly in its functional capacity—what it is doing that might be cardioprotective (very hard to measure) rather than its cholesterol content (HDL-C), which is relatively easy to measure, but probably offers zero insight other than its positive epidemiological associations. In other words, measuring HDL cholesterol content tells you little about its cholesterol efflux capacity or any other of the numerous HDL functional properties.

15

Macrophages that become engorged with oxLDL remain full-fledged foam cells. Foam cells produce Angiotensin II, metalloproteinases, collagenases, elastases, and other proteins, which undermine the integrity of the arterial wall, causing more endothelial dysfunction. This is where the process goes to hell. Now you’ve got a damaged barrier and the looting begins. Oh, by the way, the process to date goes unnoticed by the calcium scan and CTA.

16

Chemotactic factors (i.e., chemical signals) trigger migration of smooth muscle cells to the area of injury in an attempt to repair the damage and prevent further disruption. Again, all of this is taking place in good faith on the part of the immune system. The smooth muscle cells are transformed into secretory cells that lay down a matrix to heal the injured wall. This matrix becomes the fibrous cap of the atherosclerotic plaque. Only now is the arterial lumen becoming encroached upon.

18

Metalloproteinases and collagenases are upregulated and they start to dissolve or weaken the plaque cap, typically at the shoulder regions where the diseased endothelial cells meet healthy endothelial cells. Some have used the term “vulnerable” to describe such plaques, which may be a correct term, but it also gives a false sense of confidence that we can treat atherosclerosis on a lesion-by-lesion basis. History has taught us that such hubris is unwarranted. Until proven otherwise, atherosclerosis should be viewed as a systemic condition of the arterial system. To see one of the best (and my favorite) papers on this topic, look no further than this paper by Armin Arbab-Zedeh and the venerable Valentin Fuster, aptly titled The Myth of the Vulnerable Plaque.

19

The plaque can become obstructive—i.e., it can obstruct the lumen—over time. Lipid rich plaques are unstable, and can rupture. Platelets adhere to the ruptured surface of the plaque through electrostatic factors and through binding to specific ligands.

20

The platelets then serve as a cradle for the coagulation cascade to produce a net of fibrin (the white “net” in the figure), leading to a red clot, as red blood cells are caught in the net. A non-obstructive plaque can lead to a clinical event after the superimposition of red and white thrombus, which can occur quickly and without warning. The degree of stenosis (luminal narrowing) does not predict when this will happen.

21

Why all of this matters

So back to the impetus for this post—Allan’s editorial. I met Dr. Allan Sniderman four years ago through Gary Taubes, and Allan and I immediately hit it off. Like Tom, Allan has been a remarkable mentor and teacher. In fact, one of the gifts Allan gave me a while back was his personal copy of Herbert Stary’s legendary pathology textbook on atherosclerosis, Atlas of Atherosclerosis Progression and Regression. I devoured this textbook and have since purchased additional copies to always have one on hand. Ask my patients…most of them have had to sit through viewings of it like it was a vacation photo album.

About 18 months ago, Allan and I were having dinner and discussing our favorite topic. Allan asked me to guess what fraction of cardiac events (“event” is a pretty common word in the vernacular of cardiovascular disease and basically refers to a Q-wave MI, the need for re-vascularization, or cardiac death) take place in North America in those younger than 65. I knew it was a loaded question, so I rounded my guess up to 25%. I was wrong. How wrong I wouldn’t find out until Allan and his colleagues completed the analysis that formed the basis for their editorial recently published in JAMA Cardiology. Add it to your weekly reading list.

You don’t have to be a lipidologist or a pathologist to understand this paper, but it helps to understand the basics of math—big denominators can drown out even modest numerators. The aggregate figure from this paper is one of the most elegant representations of a dot product. The subfigure on the left is how most doctors (myself included until a few years ago) and authorities think of coronary heart disease (CHD)—it’s virtually silent until the 7th or 8th decade of life. What folks who think this way miss is the middle figure—the population base (the “denominator”) is shrinking while the incidence (the “numerator”) is rising. In a situation like this, the only way to really see what’s happening is to do what Sniderman et al. did—calculate the absolute event rate, as shown on the right.
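
To make the “dot product” explicit—my notation, not the paper’s—let $r_a$ be the event rate per person in age band $a$ and $N_a$ the number of people alive in that band. Then:

$$\text{Events}_{<65} = \sum_{a<65} r_a N_a \qquad \text{and} \qquad \text{Share of events}_{<65} = \frac{\sum_{a<65} r_a N_a}{\sum_{a} r_a N_a}$$

A modest rate multiplied by a very large population can contribute more absolute events than a high rate applied to the much smaller population that survives into the 8th and 9th decades—which is exactly what the subfigure on the right captures.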

[Figure: the aggregate figure from Sniderman et al., JAMA Cardiology—event rates, the shrinking population base, and absolute event counts by age and sex]

Yes, you’re reading this graph correctly. A little over half of all events in men (24% + 28%) and a little less than a third of events in women (13% + 19%) take place below the age of 65.

Tying these two insights together

Insight #1: Atherosclerosis takes a long time to evolve, and involves many steps.

Insight #2: Many cardiovascular events—half in men and one-third in women—take place in young people (i.e., those 64 or younger).

How do we reconcile these findings? Enter the pathologists. As I mentioned above, my first-year pathology professor in med school insisted that pathologists were the only doctors who really understood heart disease, because they actually did the autopsies and examined the coronary arteries under microscopes.

And this brings me to my point. The only way Insight #1 and Insight #2 can both be true is if atherosclerosis begins very early in life—decades before the clinical event. Any guesses as to what the greatest single risk factor is for heart disease? Smoking? Nope. High blood pressure? Nope. The wickedly deadly particle I have yet to write the most deserving post on, Lp(a)? Nope. LDL-P or apoB? Nope. LDL-C? I thought I told you to never say that again. CRP? Nope. None of these things. It’s age. Age trumps everything. In this sense, atherosclerosis is an “integral” disease (in the calculus sense of the word)—meaning it’s a disease of compounding injuries, as I painstakingly went through above. Age = persistent exposure to LDL-P/apoB.
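
In symbols, here is a loose sketch of that “integral” framing (my shorthand, not a validated risk model):

$$\text{Atherosclerotic burden at age } t \;\propto\; \int_{0}^{t} [\text{apoB}](\tau)\, d\tau$$

The area under the apoB (or LDL-P) exposure curve is what compounds; age simply sets the upper limit of integration, which is why it swamps every other individual risk factor.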

Just like wealth is compounded in a highly non-linear way, so too is illness, and no disease to my knowledge compounds more clearly than atherosclerosis. So the jugular question is: when do we need to start treating patients? That is a question I can’t answer for you. Not because I don’t have a point of view, which I most certainly do, but because it comes down to risk tolerance. I can no more impose my world view of this problem on you (though the answer seems painfully obvious to me) than I can my world view of how to raise kids or combat ISIS. But I do hope to leave you with a clear picture, at least of the disease process.

Perhaps the greatest insights into the pathogenesis of atherosclerosis, especially as they pertain to age, come via autopsies of two kinds: those of people known to have died of some cause other than heart disease, and those of people with known or suspected cardiac death. The table below, taken from this paper, summarizes the six stages of atherosclerosis. Stary’s stages are identical, except that he further divides the sixth of these stages into three, for a total of eight stages, but the points remain the same. The points being, of course:

  1. In the first decade of life, fatty streaks are being formed. That’s right, before the age of 10.
  2. Atheromas are present by the time most people are in their 20’s.
  3. By the time you are in your 30’s you are quite likely to have fibroatheroma formation.
  4. The vast majority of atherosclerosis-initiating sterols get into the artery as “passengers” in apoB-containing particles, most of which (90%) are LDL particles.

[Table: the stages of atherosclerosis, from the referenced paper]

I’ll close with another interesting study, published in Circulation in 2001. The title of the study—High Prevalence of Coronary Atherosclerosis in Asymptomatic Teenagers and Young Adults—pretty much tells you what they found. The authors performed intravascular ultrasound (IVUS) on 262 heart transplant recipients about a month following their transplant. Before going any further, it’s worth pointing out that IVUS is not as sensitive as pathologic sectioning, so this study is likely underestimating the degree of disease present in the hearts (but obviously one can’t section the coronary arteries of donor hearts). By doing the IVUS on the recipients so soon after they received their transplant, the authors were able to study the hearts of the donors, many of whom were quite young.

The figures below, both taken from the paper, show the frequency distribution and prevalence of intimal thickening for each age cohort of donors. Consistent with the pathology studies, which actually cut open the arteries and examined them histologically, the IVUS study found a similar trend. Namely, atherosclerosis is starting much sooner than previously recognized. In this series of 262 heart donors, one out of six hearts donated by a teenager was found to have clinically measurable atherosclerosis. The authors conclude: “These findings suggest the need for intensive efforts at coronary disease prevention in young adults.”

[Figures from the paper: frequency distribution and prevalence of intimal thickening by donor age group]

Final thoughts

I was 35 years old, the year my daughter was born, when I first confronted my own inevitable demise, which, based on my family history, was likely to be a cardiovascular one. I’m sure I’d be better off if I’d had that epiphany when I was 25 or possibly 15, but I’m glad it happened when it did. Today, I manage all modifiable risk factors to the level of my risk appetite and my interpretation of the most nuanced scientific literature on cardiovascular disease. Does this mean I won’t succumb to heart disease? Of course not. But this is a stochastic game, and the objective of the game is to increase the odds in your favor while delaying the onset of bad outcomes. It’s up to each person, and their doctor if necessary, to determine how aggressively they want to confront the inevitable—we all have atherosclerosis at some level.

The post When does heart disease begin (and what this tells us about prevention)? appeared first on The Eating Academy | Peter Attia, M.D.
