I struggled a bit over whose picture to attach to this article. I finally decided on Lord Kelvin. Many scientists during the nineteenth century contributed to the concepts of energy and the conservation of energy, but Lord Kelvin is among the most distinguished, and I want to set the stage for a quotation of his that will introduce the next chapter in the story. That's the chapter after this one. But first, I'll introduce Lord Kelvin more thoroughly, and then get back to this chapter on energy.
William Thomson, 1st Baron Kelvin, holder of the Order of Merit, Knight Grand Cross of the Royal Victorian Order (GCVO), member of Her Majesty's Most Honourable Privy Council, President of the Royal Society, and Fellow of the Royal Society of Edinburgh, was born in 1824 and died in 1907. He was a Belfast-born British mathematical physicist and engineer.
At the University of Glasgow he did important work in the mathematical analysis of electricity and formulation of the first and second laws of thermodynamics, and did much to unify the emerging discipline of physics in its modern form. He worked closely with mathematics professor Hugh Blackburn in his work. He also had a career as an electric telegraph engineer and inventor, which propelled him into the public eye and ensured his wealth, fame, and honor.
For his work on the transatlantic telegraph project he was knighted by Queen Victoria, becoming Sir William Thomson; his later elevation to the peerage made him Lord Kelvin. He had extensive maritime interests and was most noted for his work on the mariner's compass, which had previously been limited in reliability.
Lord Kelvin is widely known for determining the correct value of absolute zero, approximately −273.15 degrees Celsius. A lower limit to temperature was recognized before Lord Kelvin's work: Sadi Carnot's "Reflections on the Motive Power of Fire," published in 1824, the year of Kelvin's birth, used −267 as the absolute zero temperature. Absolute temperatures are stated in units of kelvin in his honor. (Note that in modern usage the "K" is written without the word or symbol for degree.)
Getting back to energy: first of all, there is energy in motion. Energy due to the motion of objects is called "kinetic energy," and the larger the mass and speed of a moving object, the greater its kinetic energy. The adjective "kinetic" has its roots in the Greek word κίνηση (kinesis), meaning "motion," the same root as in the word cinema (referring to motion pictures). In classical physics, the related term "momentum" refers to motion in a straight line, more precisely called linear momentum or translational momentum.
Recall that Newton’s second law was F = ma, where both force and acceleration are vector quantities. Momentum, symbol “p,” is mass times velocity, and the second law can equally be written as F = dp/dt, the rate of change of momentum, with both sides vectors. That is actually the way Newton stated the second law in Principia.
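In modern notation (the symbols here are the standard textbook ones, not Newton's own), the momentum form of the second law reads:

```latex
% Newton's second law in momentum form, as stated in the Principia:
\vec{F} = \frac{d\vec{p}}{dt}, \qquad \vec{p} = m\,\vec{v}
% For constant mass this reduces to the familiar form:
\vec{F} = m\,\frac{d\vec{v}}{dt} = m\,\vec{a}
```

The momentum form is the more general one: it still holds when the mass changes, as for a rocket burning fuel.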
Considering gravity, the farther a rock falls, the faster it goes and the greater its kinetic energy. A rock held at a certain height has the potential of gaining a certain speed. It has a gravitational “potential energy,” which is larger for a larger mass or a greater height. The sum of a rock’s kinetic and potential energy, its total energy, remains constant as the rock falls. This is an example of the conservation of energy.
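The bookkeeping in this paragraph can be sketched numerically. Here is a minimal Python sketch, using made-up illustrative values (a 2 kg rock dropped from 20 m; the numbers are mine, not from the text):

```python
# Sketch of energy conservation for a rock dropped from rest.
# Illustrative values: a 2 kg rock released from a height of 20 m.
g = 9.81    # gravitational acceleration, m/s^2
m = 2.0     # mass of the rock, kg
h0 = 20.0   # initial height, m

for t in [0.0, 0.5, 1.0, 1.5]:
    h = h0 - 0.5 * g * t ** 2      # height after t seconds of free fall
    v = g * t                      # speed after t seconds
    kinetic = 0.5 * m * v ** 2     # kinetic energy, joules
    potential = m * g * h          # gravitational potential energy, joules
    total = kinetic + potential    # the sum stays constant as the rock falls
    print(f"t={t:.1f} s  kinetic={kinetic:6.1f} J  "
          f"potential={potential:6.1f} J  total={total:.1f} J")
```

At every instant the printed total equals the starting potential energy, m·g·h0 = 392.4 J; the kinetic share grows exactly as fast as the potential share shrinks.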
Of course, after the rock hits the ground, it has zero kinetic energy and zero potential energy. In hitting the ground, the energy of the rock itself is not conserved. But the total energy is conserved. On impact, the rock’s energy is given to random motion of the atoms of the ground and of the rock. These atoms now jiggle about with greater agitation. The haphazard motion of these atoms is the microscopic description of thermal energy, or “heat.” Where the rock hit the ground, the ground is warmer. The energy imparted to the jiggling atoms is exactly equal to the energy the rock lost on impact. To be more precise, some of the energy is converted to sound, which is vibration of atoms in the air and ground, and we can hear that.
When giant space rocks hit the ground … once they land we call those “meteorites” … the energy conversion can melt the ground at the impact site and create shockwaves, which are giant sound waves that can knock over trees. But the total energy remains constant.
Although the total energy is conserved when the rock stops, the energy available for use decreases. The kinetic energy of falling rocks, or falling water, could, for example, be used to turn a wheel. But once the energy goes over to the random motion of atoms, it is unavailable to us except as thermal energy. Moreover, the second law of thermodynamics tells us that in any action some energy becomes unavailable. When we’re enjoined for environmental reasons to “conserve energy,” we’re being asked to conserve available energy.
The entire science of “thermodynamics,” two of whose laws I’ve already cited, is about getting useful work from energy and how some energy is always lost. That is why there cannot be any “perpetual motion machines,” systems that run forever without the input of additional energy.
Quoting from Wikipedia:
The history of thermodynamics as a scientific discipline generally begins with Otto von Guericke who, in 1650, built and designed the world's first vacuum pump and demonstrated a vacuum using his Magdeburg hemispheres. Guericke was driven to make a vacuum in order to disprove Aristotle's long-held supposition that 'nature abhors a vacuum'. Shortly after Guericke, the physicist and chemist Robert Boyle had learned of Guericke's designs and, in 1656, in coordination with scientist Robert Hooke, built an air pump. Using this pump, Boyle and Hooke noticed a correlation between pressure, temperature, and volume. In time, Boyle's Law was formulated, stating that for a gas at constant temperature, its pressure and volume are inversely proportional. In 1679, based on these concepts, an associate of Boyle's named Denis Papin built a steam digester, which was a closed vessel with a tightly fitting lid that confined steam until a high pressure was generated.
Later designs implemented a steam release valve that kept the machine from exploding. By watching the valve rhythmically move up and down, Papin conceived of the idea of a piston and a cylinder engine. He did not, however, follow through with his design. Nevertheless, in 1697, based on Papin's designs, engineer Thomas Savery built the first engine, followed by Thomas Newcomen in 1712. Although these early engines were crude and inefficient, they attracted the attention of the leading scientists of the time.
The concepts of heat capacity and latent heat, which were necessary for development of thermodynamics, were developed by professor Joseph Black at the University of Glasgow, where James Watt worked as an instrument maker. Watt consulted with Black on tests of his steam engine, but it was Watt who conceived the idea of the external condenser, greatly raising the steam engine's efficiency. Drawing on all the previous work led Sadi Carnot, the "father of thermodynamics", to publish Reflections on the Motive Power of Fire (1824), a discourse on heat, power, energy and engine efficiency. The paper outlined the basic energetic relations between the Carnot engine, the Carnot cycle, and motive power. It marked the start of thermodynamics as a modern science.
The first thermodynamic textbook was written in 1859 by William Rankine, originally trained as a physicist and a civil and mechanical engineering professor at the University of Glasgow. The first and second laws of thermodynamics emerged simultaneously in the 1850s, primarily out of the works of William Rankine, Rudolf Clausius, and William Thomson (Lord Kelvin).
The foundations of statistical thermodynamics were set out by physicists such as James Clerk Maxwell, Ludwig Boltzmann, Max Planck, Rudolf Clausius and J. Willard Gibbs.
From 1873 to '76, the American mathematical physicist Josiah Willard Gibbs published a series of three papers, the most famous being "On the equilibrium of heterogeneous substances.” Gibbs showed how thermodynamic processes, including chemical reactions, could be graphically analyzed. By studying the energy, entropy, volume, chemical potential, temperature and pressure of the thermodynamic system, one can determine if a process would occur spontaneously. Chemical thermodynamics was further developed by Pierre Duhem, Gilbert N. Lewis, Merle Randall, and E. A. Guggenheim, who applied the mathematical methods of Gibbs.
Getting back to our falling rock, there is only one kind of kinetic energy, but there are many kinds of potential energy. The energy of that rock held at some height is gravitational potential energy. A compressed spring or stretched rubber band has elastic potential energy. The elastic energy of the spring can be converted to kinetic energy in projecting a rock upward.
When a positive and a negative electrical charge are held apart from each other, those charges have electrical potential energy. (The related electric potential, or voltage, is measured in volts; the energy itself, like all energy, is measured in joules.) If released, they would fly toward each other with increasing speed and kinetic energy. In an atom, the electrons orbiting the nucleus have both kinetic and potential energy.
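For two point charges this potential energy can be written down explicitly (this is standard electrostatics, not a formula from the text):

```latex
% Electrostatic potential energy of two point charges q_1 and q_2
% separated by a distance r:
U = \frac{1}{4\pi\varepsilon_0}\,\frac{q_1 q_2}{r}
% For opposite charges, q_1 q_2 < 0, so U becomes more negative as r shrinks;
% the potential energy given up appears as kinetic energy of the charges.
```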
The chemical energy of a bottle of hydrogen and oxygen molecules is greater than the energy those molecules would have if they were bound together as water at the same temperature. Should a spark ignite that hydrogen-oxygen mixture, the excess energy would appear as kinetic energy of the resulting water molecules. The water vapor would therefore be hot. The chemical energy stored in the hydrogen-oxygen mixture would have become thermal energy.
That is what gives us the heat and light of a campfire. The wood and other combustibles, mostly carbon, combine with the oxygen in the air and give off energy as heat and light. The resulting ash and carbon dioxide have a lower chemical potential energy than the original substances, and the loss of chemical energy is matched by the increase in thermal energy and light energy. But the total is conserved.
Nuclear energy is analogous to chemical energy, except that the forces involved between the protons and neutrons that make up the nucleus include the nuclear (strong and weak) forces as well as electrical forces. A uranium nucleus has a greater total energy than do the fission products it breaks into. That greater energy becomes the kinetic energy of the fission products. That kinetic energy is thermal energy, and in a nuclear reactor it can be used to make steam to turn turbines that turn generators to produce electrical power. It can also be a bomb.
When light is emitted from a glowing hot body, energy goes to the electromagnetic radiation field, and the glowing body cools, unless it is supplied with additional energy. When a single atom emits light, it goes to a state of lower energy.
The law of conservation of energy, first formulated in the nineteenth century, is a law of physics. It states that the total amount of energy in an isolated system remains constant over time. The total energy is said to be conserved over time. For an isolated system, this law means that energy is localized and can change its location within the system, and that it can change form within the system, for instance chemical energy can become kinetic energy, but that it can be neither created nor destroyed.
The formula, then, is simply E = K + V, where E is the total energy of the system, K is the kinetic energy, and V is the potential energy. We don’t use “P” for potential energy, since p stands for momentum. (I know, why not use “M” for momentum … well, m = mass.) I’m not sure where the “V” came from, but I suspect it comes from “volt,” the unit of electric potential or “voltage.”
The conservation of energy is a common feature of many physical theories. From a mathematical point of view it is understood as a consequence of Noether's theorem, which states that every continuous symmetry of a physical theory has an associated conserved quantity; if the theory's symmetry is time invariance, then the conserved quantity is called "energy." The energy conservation law is a consequence of the shift symmetry of time: energy conservation is implied by the empirical fact that the laws of physics do not change with time itself. Philosophically this can be stated as "nothing depends on time per se." In other words, if the physical system is invariant under the continuous symmetry of time translation (that is fancy talk for “passing time”), then its energy is conserved.
Conversely, systems which are not invariant under shifts in time (for example, systems with time-dependent potential energy) do not exhibit conservation of energy — unless we consider them to exchange energy with another, external system so that the theory of the enlarged system becomes time invariant again. Since any time-varying system can be embedded within a larger time-invariant system, conservation can always be recovered by a suitable redefinition of what energy is.
For example, we might assume that the rock lost its potential and kinetic energy when it hit the ground and conclude that energy is not conserved. But if we add the ground and the air around the rock to our system, we see the energy was converted to heat and the total energy was conserved.
The concept of time invariance implies that the rules of physics are the same now as they were yesterday, last week, and last year. We know that the laws of physics have not changed with time because we observe light from stars so distant that it left its source billions of years ago, and that light demonstrates that the laws of physics on those distant suns were the same as they are on our sun today. So we don’t just take it on faith that the laws of physics don’t change with time; we know it from experiment and observation.
Therefore these rules are symmetrical in time and therefore, by Noether's theorem, energy is conserved.
How many forms of energy are there? That depends on how you count. Chemical energy, for example, is ultimately electrical energy, though it is usually convenient to classify it separately. There may be forms of energy we don’t yet know about. Just a few years ago it was discovered that the expansion of the universe is not slowing down, as was generally believed — it’s accelerating. The vast amount of energy causing this acceleration has a name, “dark energy,” but there is still more mystery about it than understanding.
Thermodynamics and a true understanding of the properties of heat were a major accomplishment of the nineteenth century. These conservation-of-energy laws led to alternative methods for solving problems in the motion of objects, something now called “classical mechanics.” These new methods provided alternatives to using Newton’s laws of motion and not only gave better methods of calculation in some circumstances, but also deepened the understanding of just how these mechanical processes worked … a better understanding of Nature. All this work was built on the mathematical foundation laid in the eighteenth century. It was during the hundred years after Newton and Leibniz started the revolution that many minds and many hands added to the structure and power of this mathematical method for dealing with change.
Many of the formulas and theorems from that century, along with the names of the discoverers were transformed into a better understanding of physics via application of the conservation of energy principle. This all leads to a culmination at the very end of the nineteenth century with hints of the changes about to come in such work as Maxwell’s. So the stage has been set and the players are in the wings, ready to triumphantly enter and start this new revolution that we are still expanding today.
There are actually two twin ideas that are about to hatch from the egg produced by the two hundred years of work after Newton. These ideas are separate yet intertwined, both in their results and in their discoverers. But they are separate ideas, and sometimes one idea is only an approximation until the other can be applied. We’re about to enter the modern world of physics … a world much changed by those discoveries … the world of today … a world of technology … a world that we translate from the world of “steam” to the world of “STEM” (or, as I prefer, STEAMD).