# Understanding the Theory of Everything: A Deep Dive into Quantum Mechanics and the Schrödinger Equation

This summary and transcript were automatically generated using AI with the Free YouTube Transcript Summary Tool by LunaNotes.

## Introduction

In this comprehensive exploration of the theory of nearly everything, we delve deep into quantum mechanics, particularly focusing on the Schrödinger equation and its implications. The study of how things change with time in the quantum realm allows us to connect the fundamental laws of motion from classical mechanics to their deeper quantum theories. This piece aims to clarify these concepts and explore the relationship between classical and quantum physics, leading to a better understanding of the universe at a fundamental level.

## The Theory of Everything

The theory that attempts to explain all known physical phenomena is often referred to as the theory of everything. In this context, **quantum mechanics** provides essential insights into the behavior of particles at the microscopic scale, and the **Schrödinger equation** serves as its cornerstone.

### What is the Schrödinger Equation?

The Schrödinger equation is the foundational equation of quantum mechanics, analogous to Newton's second law in classical mechanics (F = ma). It describes how the quantum state of a physical system changes over time and is written mathematically as:

$$i\hbar\frac{\partial \psi(x,t)}{\partial t} = \hat{H}\,\psi(x,t)$$

In this equation:

- $\psi(x,t)$ represents the wave function of the system.
- $\hat{H}$ is the Hamiltonian operator, which corresponds to the total energy of the system.
- $\hbar$ is the reduced Planck constant.

The wave function encapsulates all the information about a quantum system's state, and its squared magnitude $|\psi(x,t)|^2$ gives the probability density of finding a particle at a particular position and time.
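As a minimal numerical sketch of this statement (the Gaussian wave packet, grid, and width below are arbitrary illustrative choices, not anything from the lecture), one can check that the probability density $|\psi(x)|^2$ is non-negative and integrates to 1 for a normalized state:

```python
import numpy as np

# A normalized Gaussian wave packet (illustrative choice of state).
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
sigma = 1.0
psi = (np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (2 * sigma**2))

density = np.abs(psi) ** 2         # probability density |psi(x)|^2
total_prob = np.sum(density) * dx  # total probability, ~1 when normalized
```

A higher value of `density` at a point means the particle is more likely to be found there; the integral over all space is 1 for any properly normalized state.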

## Classical vs. Quantum Mechanics

While classical mechanics can be understood through Newton's laws (for example, acceleration depends on applied force), quantum mechanics introduces complexities:

- In classical mechanics, knowing a particle's position and momentum at a given time allows for complete predictability of its future behavior.
- In quantum mechanics, however, outcomes are probabilistic: the wave function specifies only the probability of finding the particle at each location, never a certain position.

### The Role of Information in Quantum Mechanics

In classical mechanics, the maximal information about a system is given by its **position** and **momentum**; all other quantities, such as kinetic and potential energy, are functions of these two. Quantum mechanics replaces this pair with the wave function, which yields a fundamentally different view of reality:

- The future behavior of a quantum system is governed by probability distributions derived from the wave function rather than by deterministic trajectories.

## Understanding the Wave Function

The wave function $\psi(x)$ plays a pivotal role in quantum mechanics. It is not just a mathematical tool; it represents the physical state of a system and encodes all measurable information.

### Probability Density and Wave Function

The probability density of finding a particle at position $x$ is given by $|\psi(x)|^2$. This means:

- A higher probability density indicates a higher likelihood of observing the particle at that position.
- Conversely, a low probability density means the particle is unlikely to be found there.
- Because quantum mechanics is inherently probabilistic, one can never predict exact outcomes, only probabilities.

## The Time Evolution of the Wave Function

Understanding how the wave function evolves over time is crucial:

- Given an initial wave function $\psi(x,0)$, the Schrödinger equation allows us to compute $\psi(x,t)$ for subsequent times.
- Specifically, for a time-independent Hamiltonian, the time evolution is generated by the exponential of the Hamiltonian:

$$\psi(x,t) = e^{-i\hat{H}t/\hbar}\,\psi(x,0)$$

This evolution reflects how quantum states change, introducing the concept of **stationary states**—states where physical properties do not change over time despite the wave function evolving.
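This evolution law can be illustrated on a computer by discretizing the Hamiltonian as a matrix and applying the matrix exponential $e^{-i\hat{H}t/\hbar}$. The sketch below assumes units with $\hbar = m = 1$, a finite-difference kinetic term, and an arbitrarily chosen harmonic potential; it checks that the evolution is unitary, i.e. total probability stays 1:

```python
import numpy as np
from scipy.linalg import expm

n, L = 200, 10.0
dx = L / n
x = np.linspace(0.0, L, n, endpoint=False)

# Finite-difference kinetic energy -(1/2) d^2/dx^2 plus a harmonic potential.
kinetic = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
           - np.diag(np.ones(n - 1), -1)) / (2 * dx**2)
H = kinetic + np.diag(0.5 * (x - L / 2) ** 2)

psi0 = np.exp(-(x - L / 2) ** 2).astype(complex)   # some initial state
psi0 /= np.sqrt(np.sum(np.abs(psi0) ** 2) * dx)    # normalize it

U = expm(-1j * H * 0.5)                    # evolution operator for t = 0.5
psi_t = U @ psi0
norm_t = np.sum(np.abs(psi_t) ** 2) * dx   # stays 1: evolution is unitary
```

Because $H$ is Hermitian, $e^{-iHt}$ is unitary, so the norm of the state is preserved exactly (up to floating-point error) for any evolution time.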

## The Importance of Stationary States

### Measuring Position and Momentum

Stationary states are particularly significant because:

- If a particle is in such a state, the probability of finding it in a specific location does not change over time. This is a crucial concept.
- When measuring the momentum of a stationary state, the results are determined solely by the coefficients of the wave function, independent of time.

### Quantum Fluctuations

Even in what we consider vacuum states, quantum mechanics permits fluctuations. Such fluctuations mean:

- An isolated system in a stationary state may still exhibit dynamics when perturbed: under external influences such as electromagnetic fields, the quantum state can transition between energy levels by absorbing or emitting photons.

## Quantum Superposition and Measurement

The **superposition principle** in quantum mechanics states that:

- Any linear combination of valid wave functions is also a valid wave function; for example, $\psi(x,0) = A_1\psi_1(x) + A_2\psi_2(x)$.
- When a measurement is made, the system 'collapses' to one of the possible states in the superposition, with probabilities given by the squared magnitudes of the coefficients.
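A hypothetical two-state superposition makes the collapse rule concrete: outcomes occur with probabilities given by the squared magnitudes of the coefficients. The sketch below (coefficients chosen arbitrarily so that $|A_1|^2 + |A_2|^2 = 1$) simulates many repeated measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

A = np.array([0.6, 0.8])        # A_1, A_2 with |A_1|^2 + |A_2|^2 = 1
probs = np.abs(A) ** 2          # Born-rule probabilities: 0.36 and 0.64

# Each measurement 'collapses' the superposition to state 1 or state 2.
outcomes = rng.choice([1, 2], size=100_000, p=probs)
freq_state_1 = np.mean(outcomes == 1)   # approaches |A_1|^2 = 0.36
```

Any single measurement yields one definite outcome; only the long-run frequencies reveal the coefficients of the superposition.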

## Conclusion

The exploration of the Schrödinger equation and its implications for the theory of everything sheds light on the fundamental nature of the universe. By understanding the mathematical structure behind quantum mechanics and concepts like wave functions, probability densities, and superpositions, we gain profound insights into the predictable yet perplexing nature of particles at fundamental levels. This continued synthesis of classical and quantum mechanics drives our understanding forward, ultimately contributing to the broader narrative of the universe's laws.

Prof: All right, today's topic is the theory of nearly everything, okay? You wanted to know the theory of everything?

You're almost there, because I'm finally ready to reveal to you the laws of quantum dynamics that tell you how things change with time.

So that's the analog of F = ma. That's called the Schrödinger equation, and just about anything you see in this room,

or on this planet, anything you can see or use is really described by this equation I'm going to write down today.

It contains Newton's laws as part of it, because if you can do the quantum theory, you can always find hidden in it the classical theory.

That's like saying if I can do Einstein's relativistic kinematics at low velocities, I will regain Newtonian mechanics.

So everything is contained in this one. There are some things left, of course, that we won't do, but this goes a long way.

So I'll talk about it probably next time near the end, depending on how much time there is. But without further ado, I will now tell you what the

laws of motion are in quantum mechanics. So let's go back one more time to remember what we have done. The analogous statement is, in classical mechanics for a

particle moving in one dimension, all I need to know about it right now is the position and the momentum.

That's it. That's the maximal information. You can say, "What about other things?

What about total energy?" They're all functions of x and p. For example, in 3 dimensions, x will

be replaced by r, p will be replaced by some vector p, and there's a variable called angular momentum,

And you can say, "What happens when I measure any variable for a classical particle in this state, x,p?"

Well, if you know the location, it's guaranteed to be x 100 percent, momentum is p, 100 percent.

So everything is completely known. That's the situation at one time. Then you want to say, "What can you say about

the future? What's the rate of change of these things?" And the answer to that is, d^(2)x/dt^(2) times

m is the force, and in most problems you can write the force as a derivative of some potential. So if you knew the potential, 1/2kx^(2) or whatever it

is, or mgx, you can take the derivative on the right hand side, and the left hand side tells you the rate of change of

x. I want you to note one thing - we know an equation that tells you something about the acceleration.

Once the forces are known, there's a unique acceleration. So you are free to give the particle any position you like, and any velocity, dx/dt.

That's essentially the momentum. You can pick them at random. But you cannot pick the acceleration at random,

because the acceleration is not for you to decide. The acceleration is determined by Newton's laws to equal the essentially the force divided by mass.

That comes from the fact mathematically that this is a second order equation in time, namely involving the second derivative.

And that, from a mathematical point of view, if the second derivative is determined by external considerations, initial conditions are given by

initial x and the first derivative. All higher derivatives are slaved to the applied force. You don't assign them as you wish.

You find out what they are from the equations of motion. That's really all of classical mechanics. Now you want to do quantum mechanics, and we have seen many

times the story in quantum mechanics is a little more complicated. You ask a simple question and you get a very long answer.

The simple question is, how do you describe the particle in quantum mechanics in one dimension? And you say, "I want to assign to it a

function Y(x)." Y(x) is any reasonable function which can be squared and integrated over the real line.

Anything you write down is a possible state. That's like saying, any x and any p are allowed.

Likewise, Y(x) is nothing special. It can be whatever you like as long as you can square it and integrate it to get a finite answer over all of space.

That's the only condition. And if your "all of space" goes to infinity, then Y should vanish at plus and minus infinity.

That's the only requirement. Then you say, "That tells me everything. Why don't you tell me what the particle is doing?"

That's when you don't get a straight answer. You are told, "Well, it can be here, it can be there, it can be anywhere else.

And the probability density that it's at point x is proportional to the absolute square of Y." That means you take the Y and you square it,

so it will have nothing negative in it. Everything will be real and positive, and Y itself may be complex.

But this Y^(2), I told you over and over, is defined to be Y*Y. That's real and positive.

Then you can say, "What if I measure momentum, what answer will I get?" That's even longer.

First you are supposed to expand--I'm not going to do the whole thing too many times-- you're supposed to write this Y as some coefficient times these very special

functions. In a world of size L, you have to write the given Y in this fashion, and the coefficients are

determined by the integral of the complex conjugate of this function times the function you gave me, Y(x). Now I gave some extra notes, I think.

Did people get that? Called the "Quantum Cookbook?" That's just the recipe, you know.

Quantum mechanics is a big fat recipe and that's all we can do. I tell you, you do this, you get these answers. That's my whole goal, to simply give you the recipe.

So the recipe says--what's interesting about quantum mechanics, what makes it hard to teach, is that there are some physical principles which are summarized

by these rules, which are like axioms. Then there are some purely mathematical results which are not axioms.

They are consequences of pure mathematics. You have to keep in mind, what is a purely mathematical result, therefore is deduced from the

laws of mathematics, and what is a physical result, that's deduced from experiment. The fact that Y describes everything is a

physical result. Now it tells you to write Y as the sum of these functions, and then the probability to

obtain any momentum p is A_p^(2), where A_p is defined by this. The mathematics comes in in the following way - first question

is, who told you that I can write every function Y in this fashion? That's called Fourier's theorem,

that guarantees you that in a circle of size L, every periodic function, meaning that returns to its starting value, may be expanded in terms of

these functions. That's the mathematical result. The same mathematical result also tells you how to find the

coefficients. The postulates of quantum mechanics tell you two things. A_p^(2) is the probability that you will get

the value p when you measure momentum, okay? That's a postulate, because you could have written

this function 200 years before quantum mechanics. It will still be true, but this function did not have a meaning at that time as states of definite momentum.

How do I know it's a state of definite momentum? If every term vanished except one term, that's all you have, then one coefficient will be

A for that one term = 1, and all the others are 0; that means the probability for getting momentum has a non-zero value only for that momentum.

All other momenta are missing in that situation. Another postulate of quantum mechanics is that once you measure momentum and you get one of these values,

the state will go from being a sum over many such functions, and collapse to the one term in the sum that corresponds to the answer you got.

Then here is another mathematical result - p is not every arbitrary real number you can imagine. We make the requirement, if you go around on a circle,

the function should come back to the starting value, therefore p is restricted to be 2πℏ/L times some

integer n. That's a mathematical requirement, because if you think Y ^(2) is the probability, Y should

come back to where you start. It cannot get two different values of Y when you go around the circle.

That quantizes momentum to these values. The last thing I did was to say, if you measure energy, what answer will you get?

That's even longer. There you're supposed to solve the following equation: −(ℏ²/2m) d²Y_E/dx² + V(x)Y_E = E·Y_E. That's more

complicated, because for momentum I could tell you the functions once and for all; here I want you to solve this equation. This equation says, if in classical mechanics the

particle was in some potential V(x) and the particle had some mass m, you have to solve this equation, then it's a purely mathematical problem,

and you try to find all solutions that behave well at infinity, that don't blow up at infinity, that vanish at infinity.

That quantizes E to certain special values. And there are corresponding functions, Y_E for each allowed value.
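As an illustration of how this equation quantizes E, the sketch below solves it numerically for a particle in a box (V = 0 inside, Y vanishing at the walls), with ℏ = m = 1 assumed, and compares the lowest allowed energies to the known values n²π²/2L²:

```python
import numpy as np

n, L = 1000, 1.0
dx = L / (n + 1)

# Discretized -(1/2) d^2/dx^2 with Y = 0 enforced at both walls.
H = (np.diag(np.full(n, 1.0 / dx**2))
     + np.diag(np.full(n - 1, -0.5 / dx**2), 1)
     + np.diag(np.full(n - 1, -0.5 / dx**2), -1))

E = np.linalg.eigvalsh(H)[:3]                 # lowest three allowed energies
E_exact = np.array([1, 4, 9]) * np.pi**2 / (2 * L**2)
```

Only these discrete eigenvalues give solutions that vanish at both walls; every other E is rejected, which is exactly the quantization being described.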

Then you are done, because then you make a similar expansion, you write the unknown Y and namely some arbitrary Y that's given to you.

Just replace p by E and replace this function by these functions. Then if you square A_E you will

get the probability you will get the energy. So what makes the energy problem more complicated is that whereas for momentum we know once and for all these functions

describe a state of definite momentum where you can get only one answer, states of definite energy depend on what potential is acting on the particle.

If it's a free particle, V is 0. If it's a particle that in Newtonian mechanics is a harmonic oscillator, V(x) would be

½kx² and so on. So you should know the classical potential, and for every possible potential you have to solve

this equation to find the states of definite energy. So today, I'm going to tell you why states of definite energy are so important.

Why is the state of definite position not so interesting? What is privileged about the states of definite momentum? And now you will see the role of energy.

So I'm going to write down for you the equation that's the analog of F = ma. So what are we trying to do?

Y(x) is like x and p. You don't have a time label here. These are like saying at some time a particle has a position and a momentum.

But the real question in classical mechanics is, how does x vary with time and how does p vary with time.

First thing you've got to do is to realize that Y itself can be a function of time, right? Then you've got to ask, "What does it do with time?"

So it's flopping and moving, just like say a string. It's changing with time and you want to know how it changes with time.

So this is the great Schrödinger equation. It says iℏ ∂Y(x,t)/∂t (it's a partial derivative, because Y depends on

x and t, and this is the t derivative) = −(ℏ²/2m) ∂²Y/∂x² + V(x)Y(x,t).

If you can solve this equation, you'll be surprised how many things you can calculate. From this follows the spectrum of the atoms,

from this follows what makes a material a conductor, a semiconductor, a superconductor. Everything follows from this famous Schrödinger

equation. This is an equation in which you must notice that we're dealing for the first time with functions of time.

Somebody asked me long back, "Where is time?" Well, here is how Y varies with time. So suppose someone says, "Here is the initial Y; what is it a millisecond later?"

Well, it's the initial Y plus the rate of change of Y with time multiplied by 1 millisecond. The rate of change of Y at the initial time is obtained

by taking that derivative of Y and adding to it V times Y, you get something. That's how much Y changes.

Multiply by Δt, that's the change in Y. That'll give you Y at a later time. This is a first order equation in time.
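The update just described can be sketched in a few lines: read dY/dt off the Schrödinger equation, multiply by a small Δt, and add it to Y. (A sketch only, assuming ℏ = m = 1, periodic grid boundaries, and an arbitrarily chosen harmonic potential; this is a single explicit Euler step, not a production integrator.)

```python
import numpy as np

n, L, dt = 400, 10.0, 1e-4
dx = L / n
x = np.linspace(0.0, L, n, endpoint=False)
V = 0.5 * (x - L / 2) ** 2                        # illustrative potential

psi = np.exp(-(x - L / 2) ** 2).astype(complex)   # some initial Y(x, 0)

# dY/dt = (1/(i*hbar)) [ -(hbar^2/2m) d^2Y/dx^2 + V*Y ], with hbar = m = 1.
d2psi = (np.roll(psi, 1) - 2 * psi + np.roll(psi, -1)) / dx**2
dpsi_dt = (-0.5 * d2psi + V * psi) / 1j
psi_later = psi + dpsi_dt * dt                    # Y a moment dt later
```

The point of the step is exactly the one in the text: given Y now, the equation itself dictates dY/dt, so the future Y is not yours to choose.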

What that means mathematically is, the initial Y determines the future completely. This is different from position, where you need x and

dx/dt at the initial time. The equation will only tell you what d²x/dt² is. But in quantum mechanics, dY/dt

itself is determined, so you don't get to choose that. You just get to choose the initial Y.

That means an initial wave function completely determines the future according to this equation. So don't worry about this equation.

I don't expect you all to see it and immediately know what to do, but I want you to know that there is an equation. That is known now.

That's the analog of F = ma. If you solve this equation, you can predict the future to the extent allowed by quantum mechanics.

Given the present, and the present means Y(x,0) is given, then you go to the math department and say, "solve this equation."

It turns out there is a trick by which you can predict Y(x,t). Note also that this number i is present in the very equations of motion.

So this is not like the i we used in electrical circuits where we really meant sines and cosines, but we took

e^{iωt}, always hoping in the end to take the real part of the answer because the functions of classical mechanics are always

real. But in quantum theory, Y is intrinsically complex and it cannot get more complex than that by putting an

i in the equations of motion, but that's just the way it is. You need the i to write the equations.

Therefore our goal then is to learn different ways in which we can solve this. Now remember, everybody noticed,

this looks kind of familiar here, this combination. It's up there somewhere. It looks a lot like this.

this is working on a function of x and t. And there are partial derivatives here and there are total derivatives there.

They are very privileged functions. They describe states of definite energy. This is an arbitrary function, just evolving with time,

so you should not mix the two up. This Y is a generic Y changing with time. So let's ask, how can I calculate the future,

I'm going to do it at two levels. One is to tell you a little bit about how you get there, and for those of you who say, "Look,

spare me the details, I just want to know the answer," I will draw a box around the answer, and you are free to start from

there. But I want to give everyone a chance to look under the hood and see what's happening.

So given an equation like this, which is pretty old stuff in mathematical physics from after Newton's time, people always ask the following question.

They say, "Look, I don't know if I can solve it for every imaginable initial condition." It's like saying, even the case of the

oscillator, you may not be able to solve every initial condition, you say, "Let me find a special case where Y(x, t),

which depends on x and on t, has the following simple form - it's a function of t alone times a function of x alone."

Okay? I want you to know that no one tells you that every solution to the equation has this form.

All right, so this is an assumption. You want to see if maybe there are answers like this to the problem.

The only way to do that is to take their assumed form, put it into the equation and see if you can find a solution of this form.

That's the function of x and t. But it's not a function of x times the function of t, you see that?

x and t are mixed up together. You cannot rip it out at the two parts, so it's not the most general thing that can happen; it's a particular one.

Right now, you are eager to get any solution. You want to say, "Can I do anything? Can I calculate even in the simplest case what the future

is, given the present?" You're asking, "Can this happen?" So I'm going to show you now that the equation does admit

solutions of this type. So are you guys with me now on what I'm trying to do? I'm trying to see if this equation admits solutions of

this form. So let's take that and put it here. Now here's where you've got to do the math, okay?

Take this Y and put it here and start taking derivatives. Let's do the left hand side first.

Left hand side, I have i‚Ñè. Then I bring the d by dt to act on this product. d by dt partial means only time has to be differentiated;

x is to be held constant. That's the partial derivative. That's the meaning of the partial derivative.

It's like an ordinary derivative where the only variable you'd ever differentiate is the one in the derivative.

So you just put that Y(x) there. Then the derivative, d by dt, comes here, and I claim it becomes the ordinary derivative.

That's the left hand side. You understand that, why that is true? Because on a function only of time there's no difference

between partial derivative and ordinary derivative. It's got only one variable. The other variable, which this d by dt

doesn't care about, is just standing there. That's the left hand side. Now look at the right hand side, all of this stuff,

and imagine putting for this function Y this product form. Is it clear to you, in the right hand side the

situation's exactly the opposite? You've got all these d by dx partials; they're only interested in this function, because it's got

the x dependence. But take your time to understand this. The reason you write it as a product of two functions is the left hand side is only interested in differentiating

the function F, where it becomes the total derivative. The right hand side is only taking derivatives with respect

to x so it acts on this part of the function, it depends on x. And all partial derivatives become total derivatives because

if you've got only one variable, there's no need to write partial derivatives. This combination I'm going to write to save some time as

HY. Let's just say between you and me it's a shorthand. HY is a shorthand for this entire mess here.

Don't ask me why it looks like H times Y, where are the derivatives? It is a shorthand, okay.

I don't feel like writing the combination over and over, I can call it HY. So what equation do I have now?

It has no dependence on time. Do you see that? There's nothing here that depends on time.

Okay, now this is a trick which if you learned, you'll be quite pleased, because you'll find that as you do more and more stuff, at least in physics or

economics or statistics, the trick is a very old trick. The problem is quantum mechanics, but the mathematics is very old.

What do you think will happen if I divide both sides by F(t)·Y(x)? You get

(iℏ/F(t)) dF/dt = (1/Y(x)) HY. I've written this very slowly because I don't know, you'll find this in many

advanced books, but you may not find it in our textbook. So if you don't follow something, you should tell me.

There's plenty of time to do this, so I'm in no rush at all. These are purely mathematical manipulations. We have not done anything involving physics.

Now you have to ask yourself the following. I love this argument. Even if you don't follow this, I'm just going to get it off my

chest, it is so clever, and here is the clever part - this is supposedly a function of time, you agree?

It's all a function of time. This is a function of x. This guy doesn't know what time it is;

this guy doesn't know what x is. And yet they're supposed to be equal. What can they be equal to?

They cannot be equal to a function of time, because then, as you vary time--suppose you think it's a function of time, suppose.

It's not so. Then as time varies, this part is okay. It can vary with time to match that, but this cannot vary with

time at all, because there is no time here. So this cannot depend on time. And it cannot depend on x, because if it was a

function of x that it was equal to, as you vary x, this can change with x to keep up with that.

This has no x dependence. It cannot vary with x. So this thing that they are both equal to is not a function

of time and it's not a function of space. It's a constant. That's all it can be.

So the constant is going to very cleverly be given the symbol E. We're going to call the constant E.

It turns out E is connected to the energy of the problem. So now I have two equations, this = E and that = E.

Let me bring the F over: one says iℏ dF/dt = EF. The other one says HY = EY. These two equations, if you solve them,

will give you the solution you are looking for. In other words, going back here, yes, this equation does admit

solutions of this form, of the product form, provided the function F you put in the product that depends on time obeys this equation,

and the function Y that depends only on x obeys this equation. Remember, HY is the shorthand for this long

bunch of derivatives. We'll come to that in a moment, but let's solve this equation first.

Now can you guys do this in your head? iℏ dF/dt = EF. So it's saying F is a function of time whose time

derivative is proportional to the function itself. Everybody knows what such a function is. It's an exponential.

And the answer is, I'm going to write it down and you can check, F(t) is F(0)e^{−iEt/ℏ}. If you put t = 0,

the exponential goes away and F(t) = F(0), as it should. But take the time derivative and see. When you take a time derivative of this,

you get the same thing times −iE/ℏ, and when you multiply by iℏ, everything cancels except EF.
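That cancellation is easy to verify numerically as well: with ℏ set to 1 and arbitrary values of E, F(0), and t, a central-difference derivative of F(t) = F(0)e^{−iEt} satisfies i dF/dt = EF:

```python
import numpy as np

E, F0, t, h = 2.5, 1.0, 0.7, 1e-6    # arbitrary energy, F(0), time, step

F = lambda s: F0 * np.exp(-1j * E * s)

dF_dt = (F(t + h) - F(t - h)) / (2 * h)   # numerical time derivative
lhs = 1j * dF_dt                          # i * dF/dt (hbar = 1)
rhs = E * F(t)                            # matches: everything cancels but EF
```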

So this is a very easy solution. So let's stop and understand. It says that if you are looking for solutions that are products

of F(t) times Y(x), then F(t) is necessarily this exponential function; that's the only function you can have.

But now once you pick that E, you can pick E to be whatever you like, but then you must also solve this equation at the same time.

Prof: It's the state of definite energy. Remember, we said functions of definite energy obey that equation.

So that Y is really just Y_E. So now I'll put these two pieces together, and this is where those of you who drifted off can come back,

because what I'm telling you is that the Schrödinger equation in fact admits a certain solution which is a product of a function of time and a function of space.

And what we found by fiddling around with it is that F(t) and Y are very special: F(t) must look like

e^{−iEt/ℏ}, and Y is just our friend, Y_E(x), the functions associated with a definite energy.

Yes? Student: Is it possible that there are other solutions Y(x,t) that don't satisfy these

conditions? Prof: Okay, the question is, are there other solutions for which this factorized form is not valid? Yes, there are.

But I want everyone to understand that you can at least solve one case of Schrödinger's equation. So what does this mean?

I want you guys to think about it. This says if you start Y in some arbitrary configuration, that's my initial state, let it evolve with time,

it obeys this rather crazy, complicated Schrödinger equation. But if I start it at t = 0, in a state which is a state

of definite energy, all you do is attach this phase factor, e^{−iEt/ℏ}. Therefore it's not a generic solution, because you may not in

general start with a state which is a state of definite energy. You'll start with some random Y(x) and it's made up of many, many Y_E's

that come in the expansion of that Y, so it's not going to always work. But if you picked it so that there's only one such term in

the sum over E, namely one such function, then the future is given by this. For example, if you have a particle in a

box, you remember the wave function Y_n looks like √(2/L) sin(nπx/L). An arbitrary Y doesn't look like that.

But if you chose it initially to be exactly the sine function, for example, Y_1, then I claim as time evolves, the future state is just this

initial sine function times this simple exponential. This behavior is very special, and it's called a normal mode. It's a very common idea in mathematical physics.

It's the following - it's very familiar even before you did quantum mechanics. Take a string tied at both ends and you pluck the string and you

release it. Most probably you'll pull it at one point, say in the middle, and let it go.

There's an equation that determines the evolutions of that string. I remind you what that equation is.

It's ∂²Y/∂x² = (1/v²) ∂²Y/∂t², where v is the wave velocity. That's the wave equation for a string.

It's somewhat different from this problem, because it's a second derivative in time that's involved.

Nonetheless, here is an amazing property of this equation, derived by similar methods. If you pull a string like this and let it go,

it will go crazy when you release it. I don't even know what it will do. It will do all kinds of things, stuff will go back and forth,

back and forth. But if you can deform the string at t = 0 to look exactly like this, sin(πx/L)

times a number A, that's not easy to do. Do you understand that to produce the initial profile, one hand is not enough, two hands are not enough?

You've got to get an infinite number of your friends, who are infinitesimally small. You line them up along the string until each one--tell the

person here to pull it exactly this height. Person here has to pull it exactly that height. You all lift your hands up, then I follow you.

you all let go, and this is the initial state. But look what happens at a later time. Every point x rises and falls with the same period.

It goes up and down all together. That means a little later it will look like that, a little later it will look like that,

then it will look like this, then it will look like this, then it will look like that, then it will go back and forth. But at every instant, this guy, this guy,

this guy are all rescaled by the same amount from the starting value. That's called a normal mode.
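The normal-mode property can be checked directly against the wave equation given earlier: Y(x,t) = A sin(πx/L)cos(πvt/L) rises and falls in step, and its second derivatives satisfy ∂²Y/∂x² = (1/v²)∂²Y/∂t². (A, L, v, and the sample point below are arbitrary choices for this sketch.)

```python
import numpy as np

A, L, v = 1.0, 2.0, 3.0          # amplitude, string length, wave speed
Y = lambda x, t: A * np.sin(np.pi * x / L) * np.cos(np.pi * v * t / L)

x, t, h = 0.7, 0.4, 1e-5
d2Y_dx2 = (Y(x + h, t) - 2 * Y(x, t) + Y(x - h, t)) / h**2
d2Y_dt2 = (Y(x, t + h) - 2 * Y(x, t) + Y(x, t - h)) / h**2

residual = d2Y_dx2 - d2Y_dt2 / v**2   # ~0: the mode solves the wave equation
```

The spatial profile sin(πx/L) never changes shape; only the overall factor cos(πvt/L) oscillates, which is exactly the "rise and fall in step" behavior.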

Typically, if you don't think about it and pluck the string, your initial state will be a sum of these normal modes, and that will evolve in a complicated way.

But if you engineered it to begin exactly this way, or in any one of those other functions where you put an extra n here, they all have the remarkable

property that they rise and fall in step. What we have found here is in the quantum problem, if you start the system in that particular configuration,

then its future has got a single time dependence common to it. That's the meaning of the factorized solution.

So we know one simple example. Take a particle in a box. If it's in the lowest energy state or ground state, the wave

function just gets multiplied by e^{−iEt/ℏ}. That's the energy associated with that function. That's how it will oscillate. Now you guys follow what I said now, with an analogy with the

string and the quantum problem? They're slightly different equations. One is second order, one is first order.

One has cosines in it, one has exponentials in it. But the common property is, this is also a function of time times the function of space.

Here, this is a function of time and a function of space. Okay, so I'm going to spend some time analyzing this particular function.

Y(x,t) = e^(-iEt/ℏ) Y_E(x). And I'm going to commit an abuse of notation and give the subscript E to this guy also. What I mean to tell you by that is, this Y, which solves the Schrödinger equation--by the way, I invite you to go check it. Take the Y, put it into the Schrödinger equation and you will find it works.

In the notes I've given you, I merely tell you that this is a solution to Schrödinger's equation. I don't go through this argument of assuming it's a product. But this solves the Schrödinger equation, and I call it Y_E because the functions on the right hand side are identified with states of definite energy E.

For example, what's the position going to be? What's the probability for definite position?

What's the probability for definite momentum? What's the probability for definite anything? How will they vary with time?

I will show you, nothing depends on time. You can say, "How can nothing depend on time?

I see time in the function here." But it will go away. Let us ask, what is the probability that the particle is

at x at time t for a solution like this? You know, the answer is Y*(x,t) Y(x,t), and what do you get when you do that? You will get the absolute value of Y_E(x) squared times e^(iEt/ℏ) times e^(-iEt/ℏ), and that product of exponentials is just 1.

I hope all of you know that the absolute value squared of e^(iθ) is 1. So it does not depend on time.
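As a quick numerical check of that statement, here is a sketch for a particle-in-a-box stationary state, with ℏ, m, and L all set to 1 (an invented convention for illustration):

```python
import numpy as np

# Stationary state of a particle in a box:
# psi(x, t) = sqrt(2/L) * sin(n*pi*x/L) * exp(-i*E_n*t/hbar).
# |psi|^2 is the same at every t, because |exp(-i*E*t/hbar)| = 1.
hbar = m = L = 1.0
n = 1
x = np.linspace(0, L, 200)
E_n = (hbar * np.pi * n) ** 2 / (2 * m * L**2)   # box energy level

def psi(t):
    return np.sqrt(2 / L) * np.sin(n * np.pi * x / L) * np.exp(-1j * E_n * t / hbar)

density_0 = np.abs(psi(0.0)) ** 2
density_later = np.abs(psi(5.0)) ** 2
assert np.allclose(density_0, density_later)     # the probability cloud never moves
```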

Even though Y depends on time, Y *Y has no time dependence. That means the probability for finding that particle will not

change with time. That means if you start the particle in the ground state Y, and Y^(2) in fact looks pretty much the same, it's a real function, this probability does not change with time. That means you can make a measurement any time you want and get the same distribution of answers.

It depends on time and it doesn't depend on time. It's a lot like e^(ipx/ℏ). It seems to depend on x but the density does not depend

on x because the exponential goes away. Similarly, it does depend on time. Without the time dependence, it won't satisfy

the Schrödinger equation, but the minute you take the absolute value, this goes away. That means for this particle, I can draw a little graph that

looks like this, and that is the probability cloud you find in all the textbooks. Have you seen the probability cloud?

They've got a little atom with fuzzy stuff all around it. Those are the states of the hydrogen atom or some other atom, found by solving the same equation in 3 dimensions instead of 1 dimension, and for V(x) you write -Ze^(2)/r, where r = √(x^(2) + y^(2) + z^(2)).

Ze is the nuclear charge, and -e is the electron charge. You put that in and you solve the equation and you will find a

whole bunch of solutions that behave like this. They are called stationary states, because in that stationary state-- see, if a hydrogen atom starts

out in this state, which is a state of definite energy, as time goes by, nothing happens to it essentially.

Something trivial happens; it picks up the phase factor, but the probability for finding the electron never changes with time.

So if you like, you can draw a little cloud whose thickness, if you like, measures the probability for finding it at that location.

So that will have all kinds of shapes. Some look like dumbbells pointing to the north pole and south pole; others are uniformly spherical distributions. They all show the probability of finding the electron in that state, and it doesn't change with time. So a hydrogen atom, when you leave it alone,

will be in one of these allowed states. You don't need the hydrogen atom; this particle in a box is a good enough quantum system.

If you start it like that, it will stay like that; if you start it like that, it will stay like that, times that phase factor.

So stationary states are important, because that's where things have settled down. Okay, now you should also realize that that's not a

typical situation. Suppose you have in 1 dimension, there's a particle on a hill, and at t = 0, it's given by some wave

function that looks like this. So it's got some average position, and if you expand it in terms of exponentials of p, it's got some range of momenta.

Let's say it's right now got an average momentum to the left. What do you think will happen to it? Pardon me?

Student: It will move to the left. Prof: It will move to the left. Except for the fuzziness, you can apply your classical

intuition. It's got some position, maybe not precise. It's got some momentum, maybe not precise,

but when you leave something on top of a hill, it's going to slide down the hill. The average x is going to go this way,

and the average momentum will increase. So that's a situation where the average of the physical quantities change with time.

That's because this state is not a function Y_E(x). It's some random function you picked.

Random functions you picked in some potential will in fact evolve with time in such a way that measurable quantities will change with time.

So stationary states are very privileged, because if you start them that way, they stay that way, and that's why when you look at

atoms, they typically stay that way. But once in a while, an atom will jump from one stationary state to another one, and you can say that looks like a contradiction.

You know the answer to that? Why does an atom ever change then? If it's in a state of definite E, it should stay that way forever.

And what I really mean by that is, this problem V(x) involves only the Coulomb force between the electron and the proton.

If that's all you have, an electron in the field of a proton, it will pick one of these levels, it can stay there forever.

When you shine light, you're applying an electromagnetic field. The electric field and magnetic field apply extra forces on the charge, and V(x) should change to something else, so that this function is no longer a state of definite energy for the new problem, because you've changed the

rules of the game. You modified the potential. Then of course it will move around and it will change from

one state to another. But an isolated atom will remain that way forever. But it turns out even that's not exactly correct.

You can take an isolated atom in the first excited state of hydrogen, come back a short time later, and you'll find the fellow has come down.

Earlier I said you need an extra thing, an extra electromagnetic field, to act on it before it will emit the photon. But where is the field?

I've turned everything off. E and B are both 0. So it turns out that the state E = B = 0 is like

a state say in a harmonic oscillator potential x = p = 0, sitting at the bottom of the well.

We know that's not allowed in quantum mechanics. You cannot have definite x and definite p. It turns out in quantum theory, E and B are like x and p: a state of definite B is not a state of definite E. It looks that way in the macroscopic world,

because the fluctuations in E and B are very small. Therefore, just like in the lowest energy state,

an oscillator has got some probability to be jiggling back and forth in x and also in p. The vacuum, which we think has no E and no B,

has small fluctuations, because E and B both vanishing is like x and p both vanishing. Not allowed.

So you've got to have a little spread in both E and B. They're called quantum fluctuations of the vacuum.

So that's a theory of nothing. The vacuum you think is the most uninteresting thing, and yet it is not completely uninteresting,

because it's got these fluctuations. It's those fluctuations that tickle the atom and make it come from an excited state to a ground state.

Okay, so unless you tamper with the atom in some fashion, it will remain in a stationary state. Those states are states of definite energy.
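In the notation of the summary above, the equation those stationary states satisfy comes from factoring the time dependence out of the full equation; this is the standard textbook form, with the lecture's Y written as ψ:

```latex
% Time-dependent Schrodinger equation (how states evolve):
i\hbar\,\frac{\partial \Psi(x,t)}{\partial t} = \hat{H}\,\Psi(x,t)
% Assume the factorized (stationary) form:
\Psi(x,t) = \psi_E(x)\,e^{-iEt/\hbar}
% Substituting and cancelling the common phase factor leaves the
% time-independent Schrodinger equation the lecture writes as H Y = E Y:
\hat{H}\,\psi_E(x) = E\,\psi_E(x)
```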

They are found by solving the Schrödinger equation without time in it. HY = EY is called the time independent Schrödinger equation, and that's what most of us do most of the time. The problem can be more complicated.

It can involve two particles, can involve ten particles. It may not involve this force, may involve another force, but everybody is spending most of his time or her time solving

the Schrödinger equation to find states of definite energy, because that's where things will end up. All right, I've only shown you that the probability to find

different positions doesn't change with time. I will show you the probability to find different anything doesn't change with time.

Nothing will change with time, not just the x probability, so I'll do one more example. Let's ask, what's the probability to find a momentum p? The recipe was: if you want the amplitude, take the given function, multiply it by the conjugate of the momentum function, e^(-ipx/ℏ)/√(2πℏ), at some time t, and do that integral. Then you take the absolute value squared of that. And that's done at every time. You take the absolute value squared and that's the probability to get momentum p, right?

Now, for our solution, Y(x,t) is e^(-iEt/ℏ) times Y_E(x), and the time-dependent factor does not depend on x. You can pull it outside the integral--or let me put it another way. Let's do the integral and see what you get.

Do you see that? If the only thing that happens to Y is that it gets an extra factor at later times, the only thing that happens to A_p is that it gets the same extra factor at later times. But the probability to find momentum p is the absolute value squared of that, and in the absolute value process, this guy is gone. You follow that?

Since the wave function changes by a simple phase factor, the coefficient to have a definite momentum also changes by the same phase factor. It's called a phase factor because it's an exponential of modulus 1, but when you take the absolute

value, the guy doesn't do anything. Now you can replace p by some other variable. It doesn't matter.

The story is always the same. So a state of definite energy seems to evolve in time, because of the factor e^(-iEt/ℏ),

but none of the probabilities change with time. It's absolutely stationary. Just put anything you measure.
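A quick sketch of this claim for momentum: multiply a box eigenfunction by the phase e^(-iEt/ℏ) and check that the momentum distribution, here approximated by a discrete Fourier transform standing in for the p-integral, is unchanged. The grid sizes and the energy value are illustrative:

```python
import numpy as np

# For a stationary state psi(x,t) = psi_E(x) * exp(-i*E*t/hbar), every
# momentum amplitude A_p picks up the same overall phase, so |A_p|^2
# never changes with time.
hbar, L, E = 1.0, 1.0, 4.0
x = np.linspace(0, L, 256, endpoint=False)
psi_E = np.sqrt(2 / L) * np.sin(2 * np.pi * x / L)   # a box eigenfunction

def momentum_probs(t):
    psi_t = psi_E * np.exp(-1j * E * t / hbar)
    A_p = np.fft.fft(psi_t)      # discrete stand-in for the Fourier integral
    return np.abs(A_p) ** 2

# The distribution at t = 0 and at a later time are identical:
assert np.allclose(momentum_probs(0.0), momentum_probs(3.7))
```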

That's why those states are very important. Okay, now I want to caution you that not every solution looks like this.

That's the question you raised. I'm going to answer that question now. Let's imagine that I find two solutions of this product form, call them Y_1 and Y_2, with energies E_1 and E_2.

This function has all the properties I mentioned, namely nothing depends on time. That one has the same property.

But because the Schrödinger equation is a linear equation, it's also true that this Y, which is Y_1 + Y_2, add this one to this one, is also a solution. I think I have done it many, many times.

If you take a linear equation, Y_1 obeys the Schrödinger equation, Y_2 obeys the Schrödinger equation. Add the left hand side to the left hand side and the right hand side to the right hand side, and you will find that if Y_1 obeys it and Y_2 does, so does Y_1 + Y_2.

Can you see that? That's superposition of solutions. It's a property of linear equations.

Nowhere does Y^(2) appear in the Schrödinger equation, therefore you can add solutions. But take a solution of this form, F_1 times Y_1, and another of the form F_2 times Y_2; the sum is not a product of some F and some Y. You cannot write it as a product, you understand?

That's a product, and that's a product, but their sum is not a product, because you cannot pull out a common function of time from the two of them.

They have different time dependence. But that is also a solution. In fact, now you can ask yourself, what is the most

general solution I can build in this problem? Well, I think you can imagine that I can now write Y(x,t) as a sum over E of A_E Y_E(x)e^(-iEt/ℏ).

Do you agree? Every term in it satisfies the Schrödinger equation. You add them all up, each multiplied by any constant A_E, and that also satisfies the Schrödinger equation. So now I'm suddenly manufacturing more complicated

solutions. The original modest goal was to find a product form, but once you got the product form, you find if you add them

together, you get a solution that's no longer a product of a function of x and a function of t,

because this guy has one time dependence; another term is a different time dependence. You cannot pull them all out.

So we are now manufacturing solutions that don't look like their products. This is the amazing thing about solving the linear equation.

You seem to have very modest goals when you start with a product form, but in the end, you find that you can make up a linear combination of products.
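One way to see numerically that a sum of two products is not itself a product: sample Y on an (x, t) grid. A single product of a function of x and a function of t gives a rank-1 matrix, while a sum of two products with different energies gives rank 2. The grid and the convention ℏ = m = L = 1 are invented for illustration:

```python
import numpy as np

# A factorized solution Y_E(x)*exp(-i*E*t/hbar) sampled on a grid of
# (x, t) values is an outer product: a rank-1 matrix. A sum of two such
# products with different energies cannot be factorized, and has rank 2.
hbar, L = 1.0, 1.0
x = np.linspace(0, L, 64)[:, None]     # column: positions
t = np.linspace(0, 5, 64)[None, :]     # row: times

def product_solution(n):
    E_n = (np.pi * n) ** 2 / 2         # box energies with hbar = m = L = 1
    return np.sin(n * np.pi * x / L) * np.exp(-1j * E_n * t / hbar)

rank1 = np.linalg.matrix_rank(product_solution(1))
rank_sum = np.linalg.matrix_rank(product_solution(1) + product_solution(2))
assert rank1 == 1      # one product: a common time factor can be pulled out
assert rank_sum == 2   # sum of two products: no common time factor exists
```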

Then the only question is, will it cover every possible situation you give me? In other words, suppose you come to me with an

arbitrary initial state. I don't know anything about it, and you say, "What is its future?"

Can I handle that problem? And the answer is, I can, and I'll tell you why that is true.

Y(x,t) looks like a sum over E of A_E, which I'm going to write more explicitly as A_E Y_E(x)e^(-iEt/ℏ).

In other words, I can only handle those problems whose initial state looks like this. But my question is, should I feel limited in any

way by the restriction? Do you follow what I'm saying? Maybe I'll say it one more time.

This is the most general solution I'm able to manufacture: a sum over E of A_E Y_E(x)e^(-iEt/ℏ).

It's a sum over solutions of the product form, each one with a different coefficient. That's also a solution to the Schrödinger equation.

If I take that solution and say, "What does it do at t = 0?" I find it does the following.

At t = 0, it looks like this. So only for initial functions of this form do I have the future. But that "only" is not a big restriction, because every function you can

give me at t = 0 can always be written in this form. It's a mathematical result that says that just like sines and cosines and certain exponentials are a complete set of functions

for expanding any function. The mathematical theory tells you that the solutions of H Y = E Y, if you assemble all of them,

can be used to build up an arbitrary initial function. That means any initial function you give me, I can write this way, and the future of that initial state is this guy.
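Here is a sketch of that completeness claim for a particle in a box, using an invented triangle-shaped (plucked-string) initial state: the overlap integrals give the coefficients, and the reconstruction improves as more energy states are kept.

```python
import numpy as np

# Expand an "arbitrary" initial state in box eigenfunctions
# Y_n(x) = sqrt(2/L) * sin(n*pi*x/L), via overlap coefficients
# A_n = integral of Y_n(x) * psi0(x) dx (Riemann sum below).
L = 1.0
x = np.linspace(0, L, 2000)
dx = x[1] - x[0]

psi0 = np.minimum(x, L - x)                     # triangle-shaped pluck
psi0 = psi0 / np.sqrt((psi0**2).sum() * dx)     # normalize it

def Y(n):
    return np.sqrt(2 / L) * np.sin(n * np.pi * x / L)

def reconstruct(n_max):
    out = np.zeros_like(x)
    for n in range(1, n_max + 1):
        A_n = (Y(n) * psi0).sum() * dx          # overlap coefficient
        out += A_n * Y(n)
    return out

err_few = np.max(np.abs(reconstruct(3) - psi0))
err_many = np.max(np.abs(reconstruct(50) - psi0))
assert err_many < err_few     # keeping more states rebuilds psi0 better
```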

There are lots of mathematical restrictions, like being single valued. Physicists usually don't worry about those restrictions till of

course they get in trouble. Then we go crawling back to the math guys to help us out. So just about anything you can write down, by the way physics

works, things tend to be continuous and differentiable. That's the way natural things are. So for any function we can think of, it is true.

You go to the mathematicians, and they will give you a function that is nowhere continuous, nowhere differentiable, nowhere defined, nowhere something.

That's what makes them really happy. But they are all functions the way they've defined it, but they don't happen in real life,

because whatever happens here influences what happens on either side of it, so things don't change in a discontinuous way.

Unless you apply an infinite force, an infinite potential, infinite something, everything is what's called C infinity: you can differentiate it any number of times. So we don't worry about the restriction. So in the world of physicists' functions, you can write any

initial function in terms of these functions. So let me tell you then the process for solving the Schrödinger equation under any conditions.

Initial state, final state. This is given, this is needed. So I'll give you a 3 step solution. Step one: solve HY_E = EY_E to find the functions Y_E. Step two: expand the given initial state as Y(x,0) = sum over E of A_E Y_E(x). Step three: the state at time t is the sum of each A_E that you got times e^(-iEt/ℏ)Y_E(x).

So what I'm telling you is, the fate of a function Y

with the wiggles and jiggles is very complicated to explain. Some wiggle goes into some other wiggle that goes into some other wiggle as a function of time,

but there is a basic simplicity underlying that evolution. The simplicity is the following. If at t = 0 you expand your Y as such a sum,

where the coefficients are given by the standard rule, then as time goes away from t = 0, all you need to do is to multiply each coefficient by the

particular term involving that particular energy. And that gives you the Y at later times. A state of definite energy in this jargon will be the one in which only one coefficient A_E is nonzero.

That state has got only 1 term in the sum, its time evolution is simply given by this phase factor, and all probabilities are constant.

But if you mix them up with different coefficients, you can then handle any initial condition. So we have now solved really for the future of any quantum

mechanical problem. So I'm going to give you from now to the end of class concrete examples of this.
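As one such concrete example, here is a minimal sketch of the three-step procedure for a particle in a box, with ℏ = m = L = 1 and an invented two-mode initial state:

```python
import numpy as np

# Three-step recipe for a particle in a box (hbar = m = L = 1):
# 1) know the eigenfunctions Y_n and energies E_n,
# 2) expand the initial state: A_n = integral of Y_n(x) * psi(x,0) dx,
# 3) attach exp(-i*E_n*t) to each coefficient and re-sum.
L = 1.0
x = np.linspace(0, L, 1000)
dx = x[1] - x[0]

def Y(n):
    return np.sqrt(2 / L) * np.sin(n * np.pi * x / L)

def E(n):
    return (np.pi * n) ** 2 / 2

psi0 = Y(2) / np.sqrt(2) + Y(3) / np.sqrt(2)   # invented two-mode initial state

def evolve(t, n_max=10):
    psi_t = np.zeros_like(x, dtype=complex)
    for n in range(1, n_max + 1):
        A_n = (Y(n) * psi0).sum() * dx                  # step 2: overlap
        psi_t += A_n * np.exp(-1j * E(n) * t) * Y(n)    # step 3: attach phase
    return psi_t

# At t = 0 the recipe reproduces the initial state; at later times the
# probability density is different, because this is NOT a stationary state.
assert np.allclose(evolve(0.0).real, psi0, atol=1e-3)
assert not np.allclose(np.abs(evolve(1.0))**2, psi0**2, atol=1e-3)
```

The two final checks mirror the lecture: a single-term state would have a time-independent density, but this two-term mixture does not.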

But I don't mind again answering your questions, because it's very hard for me to put myself in your place. So I'm trying to remember when I did not know quantum

mechanics, sitting in some sandbox and some kid was throwing sand in my face. So I don't know.

I've lost my innocence and I don't know how it looks to you. Yes. Student: For each of these problems,

Prof: Right. So let's do the following problem. Let us take a world in which everything is inside the box of

length L. And someone has manufactured for you a certain state. Let me come to that case in a minute.

Let me take a simple case, then I'll build up to the situation you want. Let's first take a simple case where at t = 0 the state is Y(x,0) = √(2/L) sin(nπx/L).

You agree, that's a state of definite energy. The energy of that state, E_n, is ℏ^(2)π^(2)n^(2)/2mL^(2). You might ask, why did we care about these functions? Now I can tell you why. If this is my initial state, let me take a particular

n, then the state at any future time, Y(x,t), is very simple: the square root of 2/L times sin(nπx/L) times e^(-iE_nt/ℏ).

All I've done to that initial state is multiply by e^(-iE_nt/ℏ), but E is not some random number.

E is labeled by n, and E_n is whatever you have here. That's the time dependence of that state.

It's very clear that if you took the absolute value of this Y, this guy has absolute value = 1 at all times. It's like saying cos t depends on time,

sin t depends on time, but cos^(2)t + sin^(2)t, which seems to depend on time, doesn't.

So this seems to depend on time and it does, but when you take the absolute value, it goes away. That's the simplest problem.

I gave you an initial state, the future is very simple, attach the factor. Now let's give you a slightly more complicated state.

The more complicated state will be--I'm going to hide that for now. Let us take a Y(x,0) that looks like 3 times √(2/L)sin(2πx/L) plus 4 times √(2/L)sin(3πx/L).

What does it look like? It's a sum of 2 energy states. This guy is what I would call Y_2 in my notation, the second energy state. This guy is Y_3. Everybody is properly normalized, and these are the coefficients.

Anybody tell me what answers I can get if I measure energy now? You want to guess what are the possible energies I could get? Yes.

Prof: So her answer was, you can get, in my convention, E_2 or E_3; just put n = 2 or 3. That's all you have: your function written as a sum over Y_E has only 2 terms. That means they are the only 2 energies you can get.

So it's not a state of definite energy. You can get either this answer or this answer. But now you can sort of see, it's more likely to get this

guy, because it has a 4 in front of it, and less likely to get this guy, and impossible to get anything else.

So the probability for getting n = 2 is proportional to 3^(2), and the probability for getting n = 3 is proportional to 4^(2).

If you want the absolute probabilities, then you write 3^(2) divided by 3^(2) + 4^(2), which is 9/25, and 4^(2) divided by 3^(2) + 4^(2), which is 16/25.

If you want to get 1, I think you can see without too much trouble, if you rescale the whole thing by 1 fifth, now you'll find the total

probabilities add up to 1. That's the way to normalize the function, that's the easy way. The hard way is to square all of this and integrate it and

then set it equal to 1 and see what you have to do. In the end, all you will have to do is divide by 5. I'm just giving you a shortcut.

When you expand the Y in terms of normalized functions, the coefficients squared should add up to 1. If they don't, you just divide them by the square root of whatever they do add up to.
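The shortcut in numbers, for the 3 and 4 coefficients of this example:

```python
import numpy as np

# Coefficients of the state 3*Y_2 + 4*Y_3, built from normalized
# eigenfunctions. Probabilities are squared coefficients divided by
# their sum, 3^2 + 4^2 = 25.
coeffs = np.array([3.0, 4.0])
probs = coeffs**2 / (coeffs**2).sum()
assert np.allclose(probs, [9 / 25, 16 / 25])
assert np.isclose(probs.sum(), 1.0)

# Equivalently, rescale the coefficients themselves by sqrt(25) = 5:
normalized = coeffs / np.sqrt((coeffs**2).sum())
assert np.allclose(normalized, [3 / 5, 4 / 5])
```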

But as a function of time, you will find here things vary with time. This is not going to be time independent.

Now you notice that if I want the probability to be at some x, P(x,t), I have to take the absolute square of all of this.

And all I want you to notice is that the absolute square of all of this, you cannot drop these exponentials now. If you've got two of them, you cannot drop them,

when you multiply it by Y_1 conjugate plus Y_2 conjugate. Let's do that. So you want to multiply the whole thing by its conjugate.

You will get terms like the coefficient squared times 2/L sin^(2)(3πx/L) times 1, because the absolute value of each exponential with itself is 1. But that's not the end.