Power Supply Design Tutorial

This series of tutorials explains in-depth design steps for the buck and the boost topology DC-DC switching regulators, supplemented by dedicated sessions on PCB layout and on switching edge control for EMI that apply to all switching regulators. The tutorial series is split into 15 parts and provides details, hints and tips that are useful even to the most veteran power supply designers. Beginners who have never designed a power supply can use this series as a starting point, but engineers who already have some power supply design experience and wish to gain more depth of knowledge will benefit the most.

  • Part 1: Topologies and Fundamentals
  • Part 2: The Buck Regulator
  • Part 3: PCB Layout for Switchers
  • Part 4: The Boost Regulator
  • Part 5: Switching Edge Control for EMC
  • Part 6: The SEPIC Regulator
  • Part 7: The Ćuk Regulator
  • Part 8: Switching Regulator Control Loops
  • Part 9: Input Filters for DC Input Switchers
  • Part 10: The AC-DC Flyback Regulator
  • Part 11: Flyback Transformer Design
  • Part 12: Isolated Power Solutions
  • Part 13: Input Filters for AC Input Switchers
  • Part 14: The LLC Resonant Half-bridge Regulator
  • Part 15: Power Factor Correction

Within this series we will issue a new tutorial every week. The first one is available now. Here is the agenda:

  • Linear regulators and low dropout (LDO) regulators
  • Power dissipation and thermal management of power semiconductors
  • Switching regulator introduction and the three-terminal element
  • The three basic converter topologies: buck, boost and inverting buck-boost
  • Inductors and inductor currents

Welcome to Part 1-1 of the Power Induced Design Power Supply Design Series, Topologies and Fundamentals, brought to you by Power Electronics News. If you’ve already designed 100 buck regulators, you can probably skip this session, but I bet there’s at least something in these next 25 slides or so that even seasoned professionals would find useful or insightful. In this session, we’re going to look at linear power supplies, thermal management, and then the basics of DC to DC switching power supplies.

Here, you can see me, I’m your host, Chris Richardson, and a couple of badge photos from my history, first in 2005 at National Semiconductor, later on at Texas Instruments in 2011 after they bought National Semiconductor, and finally in 2013, when I founded Power Induced Design. If you want to get in touch with me, there’s my email, info@powerinduced.com. You can see here that the hair changes a little bit, but that cheesy smile always stays the same.

In the Beginning, There Was Discrete

This slide is for all my viewers who love both science fiction and power management. It would be more accurate to say that in the beginning there were vacuum tube based power supplies, so let’s call this the quote-unquote beginning of modern power supplies. Now, this circuit is very simple. There’s a reference, usually a Zener diode, a current limiting resistor to keep the reference from overheating, and a pass element. I’ve drawn the pass element as an NPN transistor, but n-MOSFETs will work, too. The final resistor represents the load.

Discrete Is Alive and Well

This type of circuit is used all the time to get supplies that run from high voltage but use low voltage silicon to start up. What isn’t shown is the connection to the line coming into the emitter of Q1, which is an output from an auxiliary winding. As soon as that auxiliary voltage exceeds the combination of VZ plus VBE, that is to say, the Zener breakdown voltage plus one transistor VBE, Q1 turns off and barely any power is dissipated. Also, the reason that bipolar transistors are preferred over MOSFETs, even though MOSFETs offer a lot more selection, is that it’s harder to know how much reverse voltage is needed from source to gate to make sure a MOSFET is really and truly off. Another note: if your AC to DC or HVDC supply doesn’t start up during initial testing, it’s almost always because of the startup circuit and/or the aux winding.

Then Came the Integrated NPN

The next evolution after discrete linear regulators was the integrated NPN regulator. That “integrated” comes from integrated circuit, or IC. These days, almost any linear regulator is called an LDO, which stands for low dropout regulator. In general, real NPN regulators are not actually very low in dropout, and we’ll see why on the next slide. Nor is every linear regulator an NPN regulator, since there are several configurations with PNPs or with MOSFETs.

And this is a good time to define what dropout voltage actually is. That’s the minimum amount of headroom, meaning the difference in voltage needed between the input voltage and the output voltage, in order to keep that output voltage regulated. As you might imagine, that’s a big concern as the minimum input voltage gets closer and closer to the output voltage of a given linear regulator, especially for regulators whose dropout voltage is difficult to calculate precisely.

Now, one nice thing about integration, besides having fewer parts to select and place, is that for an IC regulator all that silicon is at close to the same temperature, and that’s very good for the stability of the circuit.

There are two terms here on this page that I want to talk about more. One is PSRR. That’s power supply rejection ratio. It’s also known as audio susceptibility. This is the ability of the power supply to reject differential mode noise that’s present between the positive input, which is labeled here as VIN, and the negative input, which is implied but not explicitly labeled. Those are the ground symbols.

Then, there’s the common mode rejection ratio. That’s CMRR, and it’s more aptly named. That refers to the power supply’s ability to reject common mode noise, that is to say, noise that’s present between either the positive input and Earth or the negative input and Earth.

Limitations of the NPN Darlington

So here on this slide, we can see the internal details of a classic NPN regulator, also known as a Darlington regulator. As you can see, there are two NPN transistors and one PNP transistor in series with the control path. And when I first looked at this circuit, I thought to myself, “Hmm, there’s only one VCE drop from V input to V output. So why won’t this circuit work down to, say, 500 millivolts of dropout voltage?” But it’s actually the control circuit that needs those two VBE voltages and one VCE. Now, summed up, that’s 0.7 volts plus 0.7 volts plus 0.3 volts, and that gets you pretty close to two volts.

So with no control, there’s no stable V out, and that’s why a standard NPN Darlington regulator, like this one, would not work reliably when trying to drop, say, 5.0 volts down to 3.3 volts. You’re likely to run into dropout, especially when you know that there’s a tolerance in that 5 volts usually of, say, plus or minus 5 or plus or minus 10 percent.

So dropout is clearly a bad thing since the output voltage is no longer regulated, but another problem is that when a circuit is in dropout, all the noise at the input passes through to the output with almost no attenuation.

Power Dissipation in Linear Regulators

So for power dissipation, let’s say in the world of power supplies, there are basically three things that kill devices, over voltage, negative voltage where it’s not expected, and over temperature. And of those three things, they all really boil down to over temperature, because over voltage usually causes a high amount of current to flow, and negative voltage usually causes current to flow where it shouldn’t flow, and too much current flowing in a given place causes too much heat. So power dissipation is critical.

And what we’re looking at here in this circuit is that power dissipation in linear regulators is very straightforward. You just subtract the output voltage from the input voltage and multiply that difference by the output current. Now, when you want to calculate the worst case, and when we design power supplies it’s pretty much always the worst case that we design to, subtract the lowest output voltage from the highest input voltage and multiply that by the maximum output current.
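To make that concrete, here is a small Python sketch of that worst-case calculation. The voltages, tolerances, and load current below are example numbers I’ve picked for illustration, not values from the slides.

```python
# Minimal sketch: worst-case power dissipation in a linear regulator,
# P = (V_in - V_out) * I_out, evaluated at the corner that produces the most heat.
# All numbers below are illustrative assumptions, not datasheet values.

def linreg_worst_case_power(v_in_max, v_out_min, i_out_max):
    """Worst-case dissipation: highest input, lowest output, maximum load current."""
    return (v_in_max - v_out_min) * i_out_max

# Example: 5 V +/-10% input, 3.3 V -2% output, 1 A maximum load current
p_max = linreg_worst_case_power(v_in_max=5.0 * 1.10,
                                v_out_min=3.3 * 0.98,
                                i_out_max=1.0)
print(f"Worst-case dissipation: {p_max:.2f} W")   # ~2.27 W
```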

The Low Dropout Regulator, “LDO”

The next big thing in linear power supplies was the true low dropout regulator, which uses different ways of connecting bipolar transistors, or, for the lowest dropout, MOSFETs for the pass element. So rather than that minimum of 1.7 volts and typically more like 2.0 volts of dropout that the NPN Darlington would need, this circuit has a maximum dropout voltage just over 300 millivolts, as you can see on the graph here. And that’s at its maximum output current.

So, an important note here. This graph is at 25 degrees C. You can see it written there. And dropout voltage does change as temperature changes. An LDO delivering one amp of current will definitely heat up, so it’s very important to look at the worst case dropout over the full range of temperature.

Lower Dropout Reduces Power Loss

So the internal details of the p-MOSFET low dropout regulator make things a little bit clearer to see. This is the same device that we saw on the previous page. With only two MOSFET based elements, we can also see why only 300 millivolts or so of minimum headroom are needed between V out and V in. So back to that 5 volt in and 3.3 volt out case. If we assume that the 5.0 volts has a plus or minus 10 percent tolerance, and that’s definitely a worst case for modern power supplies, then the voltage headroom between five volts minus 10 percent, or 4.5 volts, and the 3.3 volt output would be 1.2 volts. That’s more than enough to keep this circuit regulating.
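If you want to sanity-check that kind of headroom arithmetic yourself, a few lines of Python will do it. This is just a sketch of the comparison above; the 2.0 volt and 0.3 volt dropout figures are the rough values we’ve been discussing, not guaranteed datasheet limits.

```python
# Quick headroom check (a sketch): can a regulator with a given dropout voltage
# hold 3.3 V out from a 5 V +/-10% rail? Dropout figures are the approximate
# values discussed above (~2.0 V NPN Darlington, ~0.3 V p-MOSFET LDO).

def regulates(v_in_min, v_out, v_dropout):
    """True if the minimum input still leaves enough headroom above V_out."""
    return (v_in_min - v_out) >= v_dropout

v_in_min = 5.0 * 0.90   # 4.5 V worst-case input
v_out = 3.3

print("NPN Darlington (2.0 V dropout):", regulates(v_in_min, v_out, 2.0))  # False
print("p-MOSFET LDO   (0.3 V dropout):", regulates(v_in_min, v_out, 0.3))  # True
```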

These days, it’s common for LDOs to drop from 2.5 volts down to 1.8 volts, even from 1.8 volts down to 1.2 volts. And there are some highly controlled cases where you might even be able to regulate from 1.5 volts down to 1.2 volts. So the trend in general is to reduce that dropout to a very minimum, and the purpose is to reduce power dissipation and unwanted heat. Heat is definitely the enemy in power supplies, and all electronics, really.

Packaging and Thermal Management

So we’ve got the basics of linear power supplies under our belt. Now, let’s talk more about heat. In marketing language, this is thermal management, but I prefer to say let’s not cook our power supplies. So, in general, the bigger the package, the lower the thermal resistance. And thermal resistance is very similar to electrical resistance. The higher it is, the harder it is to keep the junction, that’s the silicon at the center of the package, cool.

Not that long ago, the only packages available were limited to the leads for conducting heat away from the junction, whether they were through-hole or surface mount. The silicon usually sat on a copper base called the tab, and bond wires made of gold or aluminum made the electrical connections out to the pins. In many packages, those thin bond wires are also the only real conductors of heat, since the plastic of the body is just as poor at conducting heat as it is at conducting electrical current.

Silicon and package vendors often know and provide certain portions of the total thermal resistance from the junction to the ambient. That’s theta JA. One part they can reliably report is the thermal resistance from the junction to the solder point. That’s theta JS. But in most cases, how the package is used has such a large impact on the total theta JA, the junction to ambient, that the best the vendors can do is really provide us with some typical cases.

Experience, and by experience I mean burned chips and burned fingers, is often the best tool for power management. Still, it’s important to point out the basic equation. Final junction temperature equals ambient temperature plus the total power dissipation multiplied by that total thermal resistance from junction to ambient. At least for linear regulators, we know the power dissipation with good accuracy.
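Here is that equation as a quick Python sketch. The theta JA of 60 degrees C per watt is a number I’ve made up for illustration; in practice you take it from the vendor’s typical-case tables and adjust for your own board.

```python
# A sketch of the basic thermal equation from this slide:
#   T_junction = T_ambient + P_dissipated * theta_JA
# The theta_JA below is an illustrative assumption, not a datasheet value.

def junction_temp(t_ambient_c, p_dissipated_w, theta_ja_c_per_w):
    """Estimated steady-state junction temperature in degrees C."""
    return t_ambient_c + p_dissipated_w * theta_ja_c_per_w

t_j = junction_temp(t_ambient_c=50.0, p_dissipated_w=1.2, theta_ja_c_per_w=60.0)
print(f"Estimated junction temperature: {t_j:.0f} degC")  # 122 degC
```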

PCB Layout and Thermal Management

PCB layout is one critical parameter that the IC makers can’t control, so they provide various scenarios. Here we’re looking at the SC-74, a small package for an LDO. The so-called “minimum copper” setup is basically just the recommended copper area of the footprint of the package plus a few thin traces. Copper thickness counts, too; though it’s not written here, that’s 15 micrometers.

For contrast, look at the case where the copper area connected to the pins is greater than or equal to 300 square millimeters. To really get the most out of this copper area, it needs to connect to the pins that carry the highest current. That would be the Rext and OUT pins in this case, and that reduces the thetaJA by one-third. By the way, the fact that there are three OUT pins is a good indication that they carry heavy current and would be good for connecting copper areas to.

Exposed-Pad Package Advantage

One monumental advance in thermal management was the introduction of so-called exposed pad packaging. Now, many of these packages are pin compatible with industry standard packages, like the SO-8, or, in the case shown here, the TSSOP-16. So in these cases, the silicon still sits on top of a copper pad, but that pad is larger and/or it sits lower inside the package, and a portion of it is exposed on the bottom. The disadvantage: no more running traces underneath the IC on the top layer. But for big power, the trade-off is well worth it.

In many cases, it’s quite difficult to get a big area, like those 300 square millimeters we talked about on the previous page. It’s difficult to get that big area onto the top layer where all the traces go. So we use thermal vias and copper area on other layers.

Anyone who works with high power LEDs has certainly used or considered metal core PCB, or MCPCB. Now, exposed pad packages love MCPCB. We’ll talk more about that later. For now, one very important comment. If you don’t connect the exposed pad of an exposed pad package, most power ICs will still work, but their thermal resistance is the same as a standard package, so if you have it and you don’t use it, you get nothing.

Now, as a final note, almost every aspect of thermal management shows an exponential response, like the graph you see here. Now, that means that while more is always better, there almost always comes a point of diminishing returns. In other words, connecting your exposed pad IC to a square meter of copper is fine, but you’re not getting much benefit from most of that copper. Did I mention, by the way, that copper isn’t cheap?

FR-4 vs. Metal-Core PCB

I mentioned LEDs on the previous slide, and power LEDs are the current kings of thermal management today because there’s such a direct relationship between heat control and both the quality and the quantity of the light output.

Now, I love LEDs, by the way. LED drivers are my favorite type of power supply. Anything that lights up or blinks. In any case, power LED manufacturers have done some excellent research into thermal management. Anything that works for a power LED, which always has a thermal tab, will definitely work for a power IC with a thermal tab.

And it goes without saying that metal core PCB is wonderful if you can use it. It used to be really, really expensive, but thanks to those high volumes in LED lighting, it’s a lot more affordable now. To me, the big disadvantage with MCPCB is still that it’s quite complicated and more expensive to have more than two layers of tracks on an MCPCB.

FR-4 with Thermal Vias

FR-4 will never be as good at drawing heat away from your source as MCPCB, but there are plenty of ways to make big improvements. Thicker copper layers are one way, a simple fact of greater thermal mass. Now, I’ve seen designs with as much as 140 micrometers of copper, but if you put copper that thick on the outer layers, the components don’t sit flat anymore and assembly starts to get really complicated. So adding internal layers is another way to improve thermal resistance, and as this slide shows, thermal vias are pretty much the standard way to connect your heat source on an outer layer to heat sinking in the form of copper areas on the internal layers or on the opposite external layer.

Notice, again, that more or less exponential shape of the curves on this page. Now, here we’re looking at both the number of vias and the diameter of those vias. More is better, yes, but again, there’s a point of diminishing returns. There are also some important practical concerns. In most applications, for example, you won’t be able to fit 91 vias below an SO-8 package. While not shown here, the effectiveness of those thermal vias also drops off, and again exponentially, as they are placed farther and farther away from the heat source.

A final, very important note for thermal vias: while they are most effective when placed directly underneath the thermal tab of a power IC, that tab is a soldered area, and vias that are too big will draw away, or wick away, the solder during assembly.

Now personally, I like to use vias that have an outer diameter of 0.5 millimeters and a hole diameter of 0.25 millimeters, and I typically space them one millimeter apart. The best thing to do really is to sit down with your PCB maker and your contract manufacturer and agree beforehand on what will work best.
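As a rough illustration of how that spacing plays out, here is a little Python sketch that counts how many 0.5 millimeter vias on a 1 millimeter pitch fit under an assumed 5 by 5 millimeter exposed pad, and estimates the copper cross-section the plated barrels add. The pad size and the 25 micrometer plating thickness are my own assumptions, so treat the output as a ballpark figure, not a spec.

```python
# A rough sketch with assumed numbers: via count under an exposed pad using
# the 0.5 mm via / 0.25 mm drill / 1 mm pitch figures mentioned above.
import math

pad_x_mm, pad_y_mm = 5.0, 5.0      # assumed exposed-pad size
pitch_mm = 1.0                     # via-to-via spacing used above
drill_mm = 0.25                    # hole diameter used above
plating_mm = 0.025                 # assumed 25 um barrel plating

n_vias = int(pad_x_mm // pitch_mm) * int(pad_y_mm // pitch_mm)

# Copper cross-section of one plated barrel: annulus between drill and drill + 2 * plating
barrel_area_mm2 = math.pi * ((drill_mm / 2 + plating_mm) ** 2 - (drill_mm / 2) ** 2)
print(f"{n_vias} vias, ~{n_vias * barrel_area_mm2:.2f} mm^2 of copper through the board")
```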

What About Converting 24V to 1.2V @ 10A?

One of my colleagues at National Semiconductor, who was the guru of linear regulators, once told me about a customer who called him to ask for help designing a 10 kW linear power supply for a laser. Lasers are notoriously intolerant of ripple for their drivers, but you’d need a good size swimming pool to cool such a supply. He told them to use a switcher, but as far as I know, they never called back!

A slightly less powerful but still demonstrative example would be an industrial system with the very common bus voltage of 24V, powering a digital something that needs 1.2V. If the output current were 10A, you’d be dissipating about 230W, and for that you need a serious heatsink. You’d be burning that 230W just to deliver 12W of output, and that’s an efficiency of 5%. Not something to be proud of!
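For the curious, here is the arithmetic behind those numbers as a short Python sketch (the slide rounds the loss to 230 W).

```python
# The arithmetic behind the 24 V -> 1.2 V at 10 A linear regulator example.
v_in, v_out, i_out = 24.0, 1.2, 10.0

p_out = v_out * i_out            # 12 W delivered to the load
p_loss = (v_in - v_out) * i_out  # 228 W burned in the pass element
efficiency = p_out / (p_out + p_loss)

print(f"P_out = {p_out:.0f} W, P_loss = {p_loss:.0f} W, efficiency = {efficiency:.0%}")
# P_out = 12 W, P_loss = 228 W, efficiency = 5%
```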

Now, if I have the chance to update this seminar, I’ll see if I can build this circuit and make a video of marshmallows toasting…

Okay. So what do you do when you want to drop 24 volts to 1.2 volts at 10 amps and the marketing department tells you that marshmallow toasting is not a value added feature for your power supply? Well, the answer is that you use a switching regulator. In this presentation, we’re going to just dip our toes into the pool of switchers.

Switching to Control Average Power

Here’s a circuit and a plot showing the most basic part of a switching regulator. Now, I’ve written the source as V for voltage and shown the voltage across a load, but all we’re really doing here is connecting a source of power to that load for some period of time, t on, and not connecting that power source during the rest of the time. If we have a fixed period of length T for each cycle, then the time when the source isn’t connected is T minus t on.

The two most important concepts here are, one, the load sees an average power, or average voltage or average current, that is different from that of the source because they aren’t connected 100 percent of the time. Two, it’s the duty cycle, meaning the percentage of time that the source and load are connected, that determines the average power at the load. Now, for this circuit, duty cycle D is equal to t on divided by period T.
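Here is that relationship as a tiny Python sketch, with made-up numbers for the on-time and the period.

```python
# A minimal sketch of the duty-cycle idea: the load sees the source for t_on out
# of every period T, so the average load voltage is D * V_source in this ideal case.
# The numbers are illustrative assumptions.

v_source = 12.0   # volts
t_on = 2e-6       # seconds the switch is closed
period = 5e-6     # switching period T

duty_cycle = t_on / period
v_average = duty_cycle * v_source
print(f"D = {duty_cycle:.0%}, average load voltage = {v_average:.1f} V")  # D = 40%, 4.8 V
```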

Some Applications Accept Pure PWM


Pulse width modulation, or PWM, is a type of control that consists of varying the time that the source and the load are connected. Another way to say this would be to state that the duty cycle is modulated. A higher duty cycle means a wider pulse.

Now, there are plenty of applications where those pulses of power are applied directly to the load with little or no filtering at all. Heaters are a good one, as well as DC fans. Then there’s my favorite, or to be more honest, my least favorite, the TRIAC or phase dimmer. That’s the simple circuit shown at the bottom left, and it uses purely analog components to cut off a portion of the AC line that feeds traditional filament or halogen light bulbs. The reason for my love hate relationship with TRIAC dimmers is that once you understand a little bit more about switching power supplies, you’ll see that TRIACs and switchers mix about as well as oil and water.

Now, the DC to DC switching converter, that’s at the bottom right, starts with this concept of PWM, but it adds a critical element, a filter. The purpose of the filter is to smooth out or average the pulses of voltage, current, or power for those loads that can’t naturally perform that averaging themselves.

The Three-Terminal Element

Just about all of the converters described in this entire seminar are called quote-unquote hard switching because they turn a switch on or off while there’s either a voltage across it or a current flowing through it. Interrupting the current or short circuiting the voltage is known as hard switching, and to be honest, it’s hard on the switches. What I mean to say is that hard switching causes power dissipation. Power dissipation causes heat. And now we know that heat is the number one killer of electronics. At least it is for industrial electronics. For consumer electronics, killer number one is still the toilet that your mobile phone falls into.

One great way to think of a hard switched converter is by using this very basic three terminal element. The square wave appears at the black dot in the middle, and our friend, the power inductor, is the principal element of the smoothing filter. It’s where the inductor connects that determines the type, also known as the topology, of the switching converter.

While not shown here, it’s important to note that those two switches operate out of phase, meaning that only one of them is ever on at any given time. If for any reason they both turned on at the same time, something bad would happen. That’s what we call shoot through. More about that on the next slide.

The Buck Converter

The buck converter is the star of this show. Now, I can’t say this with absolute certainty, but I’m pretty sure that the buck is the most common of all switching regulators, and it’s definitely the most common DC to DC converter. See how the inductor connects to the output? That means that the average output current is the same as the average inductor current. As we’ll see, inductor current is one of the most important waveforms in a switching converter.

The buck is a great place to get started with switchers, because we all recognize the second order LC filter at the output. V in and the two switches generate a high frequency square wave, and then the inductor and the output capacitor team up to filter, smooth, and average that into the output voltage. There will always be some ripple, but even a sensitive laser can be operated with a buck if you filter the output properly.

Here, it’s a lot easier to see why that shoot through I talked about on the previous slide is so destructive: both switches being on at the same time short circuits the input voltage. Now sometimes, you just blow a fuse, but most DC to DC converters don’t have a fuse, and so the poor switches become unwilling fuses.

By the way, if you’re designing power electronics and you’re not blowing up parts, then you’re not trying hard enough.

The Boost Converter

Look closely at this circuit. All we’ve really done is to rotate that three terminal element. Now, look at V out and V in. If they were reversed, this would be a buck. A boost converter, as the name implies, boosts the output voltage up to a level higher than the input voltage, and it’s nothing more than a buck converter in reverse.

When I first studied these converters, the buck made perfect sense to me. After all, it’s just a filtered square wave. But I struggled with the boost at first. How could the output voltage go up? Well, the answer lies in the physics of the inductor, which is really the heart of any switching converter. Once you get a current flowing in an inductor, it is physically impossible to stop the magnetic field that accompanies that current instantaneously. A lot of people, papers, textbooks, and app notes will say it’s the current that can’t be stopped, but I prefer to think of the magnetic field.

Now, we’ll look at the boost converter in all of its glorious detail in a later section of the seminar, but for now, let’s say that the inductor can generate nearly any voltage needed to maintain that continuous magnetic field, and if we harness that ability, we can produce a higher output voltage than the input.

One final note. Thanks to the connection of the inductor at the input, the average inductor current is the same as the average input current for a boost converter.

The Inverting Buck-boost Converter

One more turn of the three terminal element gives us the final basic DC to DC converter topology, the inverting buck-boost converter. I had a wonderful professor at university who taught the introduction to power electronics course, and he put a lot of enthusiasm into the name. He always said, “Buck-boost!”

What do I mean by inverting? Well, this is another case where I think many textbooks and app notes are not specific enough. There are many buck-boost topologies, because when you use just those two words, it simply means a converter whose output voltage can be either above or below the value of the input voltage.

And this converter also inverts the polarity of the output voltage with respect to ground. In fact, you can see that I drew a polarized output capacitor to show this. This is a great secret to use if you want to power some bipolar op amps and need minus 5 volts or minus 15 volts.

Now, to be clear, it’s the absolute value of the output voltage that can be greater or less than the absolute value of the input voltage in this circuit. Once again, let’s take a look at where the inductor connects. Average inductor current is different from both the average input current and different from the average output current for inverting buck-boost converters.

Inductor Current


Looking in more detail at the heart of our basic switching converters, they all operate on the same basic principle. During the first portion of a cycle of length T, we use those switches to apply a given voltage across the inductor. This causes a current to flow. When the voltage applied is constant, the induced current increases linearly.

After a time period, t on, equal to duty cycle D multiplied by period T, has elapsed, the switches change, and a voltage of opposing polarity is applied across the inductor. Now, this is not necessarily a negative voltage with respect to ground, just negative with respect to the voltage applied during the first portion of the cycle. There’s a balance, a so-called volt-second balance, meaning that the product of the voltage applied and the length of time it’s applied during the first portion of the cycle must be equal to the product of the voltage and the length of time applied during the second portion of the cycle. If those aren’t equal, then one of two things will happen: the converter output goes to zero, or, poof, the output tries to go to infinity.
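For an ideal buck converter, that volt-second balance is easy to write down and check: during t on the inductor sees V in minus V out, and during the rest of the period it sees minus V out, so setting the two products equal gives D equal to V out divided by V in. Here is a quick Python sketch using the same 12 volt to 5 volt numbers we’ll measure in a moment; the 500 kilohertz period is my own assumption for illustration.

```python
# A sketch of volt-second balance for an ideal buck converter (component drops
# ignored). Balancing (V_in - V_out) * t_on against V_out * (T - t_on)
# gives D = V_out / V_in. The switching period is an assumed value.

v_in, v_out = 12.0, 5.0
period = 2e-6  # assumed 500 kHz switching period

duty = v_out / v_in                  # from volt-second balance
t_on = duty * period

vs_on = (v_in - v_out) * t_on        # volt-seconds applied during t_on
vs_off = v_out * (period - t_on)     # volt-seconds during the rest of the cycle
print(f"D = {duty:.2f}, on-time volt-seconds = {vs_on:.2e}, off-time = {vs_off:.2e}")
# The two products match, so the inductor current returns to the same value each cycle.
```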

As another old colleague of mine from NSC used to say, “You let the magic smoke out.”

A Real Picture of Inductor I and V

As good as my introductory class on power electronics was, we never looked at any actual waveforms. Now, I also admit that I just got a new differential voltage probe, and there’s nothing like a new toy to inspire.

Here we have a genuine buck converter, operating from an input of 12 volts and delivering an output of five volts at an output current of five amps. Channel one in yellow is the differential voltage across the inductor as shown in this schematic, and channel two in blue is the voltage at the switching node with respect to ground. That’s the black dot, remember, where those three parts of the three terminal element connect. And then finally, channel four in green is the inductor current.

The differential probe let me measure a floating voltage at the same time as a ground-referenced voltage taken with a standard probe, something that’s otherwise impossible. Here’s the actual circuit, showing where we measured the different voltages and currents, and finally, a photograph of the actual setup itself. Seeing is believing.

Next Up: Part 1-2 – Three Basic Switching Topologies

In Part 1-2, I’ll dig into more detail with the three basic switching topologies, look at more practical switches, explore the differences between continuous and discontinuous conduction modes, and look at derivative and compound topologies, as well. Our first look at a common AC to DC topology will be with the flyback regulator. Finally, I’ll stray as far into marketing territory as I dare to go, explaining a bit about what can be found inside a switching regulator IC package and what is still located outside on the PCB.

That concludes Part 1-1, and I hope you learned something and that you come back to see the next session and future ones, as well. In Part 1-2, we’ll look at each of the three basic DC to DC converters in more detail, and also look at the flyback converter and some compound topologies.

Click here for Part 1-2 of our Power Supply Design Series


