I also experimented with all kinds of devices, and through the use of mathematics I did not have to spend as much time on some of my ideas, because the numbers didn't support the theory.
It seems to me that it is crucial to have a mathematically adept person involved to save time, effort, and money on many "ideas", since they can run the numbers to see if an idea even has a chance.
Good points, Kerr. And let me share one of the most difficult aspects of what I do for a living (control systems) with regard to this idea of how much time a trial-and-error approach involves... and whether it is even feasible. Of course, I know Einstein will not accept it at all, but then again, has he ever designed any kind of closed-loop, dynamical control system? However, others may glean something from it.
It is fairly easy to understand the high-level basics of a closed-loop control system. I have a physical object or process that I wish to control (we call it "the plant"), and this plant has certain physical dynamics. Quantifying those physical plant dynamics in terms of the plant's response to varying frequencies of command stimuli is a whole problem unto itself, and one which would take years by trial and error if you did not use math. Guaranteed. But for the sake of showing how hard it is just to design the control algorithms, let's say we already know the dynamics of how the plant responds to any type of time-varying command.
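(For the curious: here is a minimal sketch, in Python, of what "quantifying the plant dynamics in terms of its frequency response" can look like. The plant model is a made-up second-order system, purely for illustration, not a real aircraft.)

```python
# A minimal sketch of characterizing a plant's frequency response.
# The plant here is a hypothetical second-order system, NOT a real aircraft model.
import numpy as np
from scipy import signal

# Hypothetical plant: G(s) = 1 / (s^2 + 0.8 s + 1)
plant = signal.TransferFunction([1.0], [1.0, 0.8, 1.0])

# Evaluate the plant's response over a range of command frequencies (rad/s)
frequencies = np.logspace(-2, 2, 200)
w, magnitude_db, phase_deg = signal.bode(plant, w=frequencies)

# Print a few points: how much the plant amplifies/attenuates and lags each frequency
for i in range(0, len(w), 50):
    print(f"w = {w[i]:8.3f} rad/s  gain = {magnitude_db[i]:7.2f} dB  phase = {phase_deg[i]:7.1f} deg")
```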
At a high level, the controller structure seems deceptively simple:
1) I want the plant to achieve a specific value for a specific state...we call that value the COMMAND. For example, I want an airplane to hold a specific altitude (altitude is the state, the value of altitude I want it to hold is the command). Hence, let us call this the altitude command.
2) I use some form of device that observes the current state of the plant that I wish to control. On aircraft we use a barometric altimeter (a pressure sensing device) to determine the actual altitude of the airplane (the plant) above a reference level via our understanding of how pressure varies in the atmosphere with respect to altitude. We call this measurement of the actual plant state FEEDBACK. So for the airplane, the barometric altimeter provides us with altitude feedback.
3) We define the difference between the state I want the airplane to be in (COMMAND) and the state that the airplane is currently in (FEEDBACK) as being the ERROR. In this case, the difference between the desired altitude of the airplane and the current altitude of the airplane is called the altitude error.
Already we see we cannot escape mathematics, for the equation to compute the altitude error is:
Altitude Error = Altitude Command - Altitude Feedback
Granted, this is very simple math... and that is what makes the problem to come so much more difficult, because these concepts are pretty damned simple.
So I wish to command the plant (the airplane) to do something to make this error become zero (or as close to zero as I can get). The altitude error signal is the artifact I will use as the basis for what we call the "control law" that will command the airplane to make the error approach zero.
But now I have the first problem: The device used to control the airplane's altitude is the elevator, which deflects angularly on a hinge at the tail of the airplane. So we measure elevator displacement in terms of its deflection in degrees. But the error signal I am measuring is in terms of feet. How do I convert (properly!) so that the altitude error in feet becomes an elevator command in degrees? The answer is a conversion factor that, in the controls world, we call the "gain". (Things are already getting more complex, but wait until you see what lies ahead!)
So if I multiply the altitude error by some gain, whose units would be [degrees/foot], then at least we know how to go from feet to degrees.
Elevator Command (Degrees) = Altitude Error Gain (Degrees/Foot) * Altitude Error (Feet)
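In code, that whole proportional path boils down to a couple of lines. A minimal sketch (the gain value here is just a placeholder to show the units, not a tuned number):

```python
# A minimal sketch of the proportional path described above.
# The gain value is a placeholder for illustration only.
def proportional_elevator_command(altitude_command_ft, altitude_feedback_ft,
                                  altitude_error_gain_deg_per_ft):
    """Convert an altitude error (feet) into an elevator command (degrees)."""
    altitude_error_ft = altitude_command_ft - altitude_feedback_ft
    return altitude_error_gain_deg_per_ft * altitude_error_ft

# Example: we want 10,000 ft, we are at 9,900 ft, and we guess a gain of 0.05 deg/ft
elevator_cmd_deg = proportional_elevator_command(10000.0, 9900.0, 0.05)
print(elevator_cmd_deg)  # 5.0 degrees of elevator
```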
But now how do I know what the right value is for this gain? That is an entire mathematical problem in itself that I will not go into here. Suffice it to say it takes time to figure out even if you use math. To figure it out via trial and error will take a lot more time.
We call the above the "proportional control command", because the command of the elevator is proportional to the altitude error. The constant of proportionality is the Altitude Error Gain. Many might think "great! I am now done with my control law design." Wrong. If you were to only implement a proportional control command for altitude control in an airplane, you would very quickly find that the performance of the airplane would be pretty sloppy over most of the flight envelope (altitude and airspeed) range that the airplane could fly.
Using trial and error you MAY have gotten lucky enough to find the right gain for ONE combination of altitude and airspeed. But it is guaranteed this gain would not work for all combinations. At other altitudes and airspeeds we might see the airplane oscillate above and below the desired altitude over time...sometimes with LARGE oscillations (maybe even hundreds of feet if the gain is REALLY off). This response is a sign that the gain is too high, which induces the oscillations. Other times we may see the airplane seem perfectly "happy" to hold an altitude that is (for example) 100 feet below the desired altitude with no attempt to correct it. This is a sign the gain is too low. So how do we fix this?
Well, the classical way to fix it is to add MORE control paths in addition to the simple proportional command. This is where calculus comes in handy. This leads to a classical control scheme called a Proportional-Integral-Derivative (PID) controller. We have already talked about the proportional component, so what are the other two?
The "INTEGRAL control command" is where we take the Altitude Error signal and run it through a calculus integrator (basically it is continuously summing the errors over time). The units that go into this integrator are (same as before) in terms of feet of error in holding altitude. But the output units of this are in feet*seconds (because an integrator operates over time...in seconds). This integrating process will "amplify" the error signal as a means to fix the problem where the airplane constantly holds an altitude too high above or too low below the desired altitude (i.e. the problem where the proportional gain is too low). Suffice it to say, we will also need to apply a gain on the integral control command to make sure the output of the integrator commands the elevator in degrees*seconds and not feet*seconds. Another task to find the right gain!!
The "DERIVATIVE control command" does the opposite of the integral command. It runs the altitude error signal through a rate-taker (it takes the calculus derivative of the error). The output of this process is in units of feet/second (as opposed to feet*seconds with the integrator). This control command path is intended to fix the other problem we talked about before...oscillations about the desired altitude. By selecting the right "derivative gain" for the altitude error signal, we can tame down (dampen out) the oscillations. That is why the derivative control command is also sometimes referred to as the "damping command".
So now we get to the punchline... we have a design structure that will actually achieve altitude hold. Now all we have to do is find the right values for the three gains: Proportional Gain, Integral Gain, and Derivative Gain. Sounds easy, right? Far from it. But there are actually people who can come to understand everything I have just explained, and still think "well, all I have to do is run an airplane simulation over and over, and use trial and error on these gains until I get perfect control performance." Each aerospace engineering controls student is actually FORCED to try this "trial and error" method with a simple aircraft simulation and a simple PID control scheme. The task I give my students is, within one week, to come back and tell me what the gains should be to get good altitude hold performance over the entire flight envelope, using only trial and error. To date, NO ONE has ever succeeded...and one class asked for another week...still couldn't do it.
That is when we introduce the students to the complex mathematics of Laplace transforms and frequency domain analysis. We use this difficult (but valuable) mathematics to analyze the dynamics of the plant and the controller as a whole. Using this mathematical process, an engineer can arrive at the proper gains for the PID controller in a matter of hours.
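To make the contrast concrete, here is roughly what that route looks like in code: instead of running the simulation over and over, you close the loop algebraically and inspect the result directly. (The plant model and the gains below are invented placeholders, purely to show the mechanics; real design work also examines the frequency response, stability margins, and the full flight envelope.)

```python
# A minimal sketch of the transfer-function approach: close the loop
# algebraically and inspect the result, rather than guessing gains.
# The plant and the gains below are made-up numbers for illustration only.
import numpy as np
from scipy import signal

# Hypothetical plant G(s) = 1 / (s^2 + 0.8 s + 1)
plant_num = [1.0]
plant_den = [1.0, 0.8, 1.0]

# PID controller C(s) = (kd*s^2 + kp*s + ki) / s, with placeholder gains
kp, ki, kd = 2.0, 0.5, 1.0
ctrl_num = [kd, kp, ki]
ctrl_den = [1.0, 0.0]

# Open loop L(s) = C(s)*G(s); closed loop T(s) = L / (1 + L)
open_num = np.polymul(ctrl_num, plant_num)
open_den = np.polymul(ctrl_den, plant_den)
closed = signal.TransferFunction(open_num, np.polyadd(open_den, open_num))

# Step response of the closed loop: does it settle on the command without
# the oscillation (gain too high) or the standoff (gain too low) described above?
t, y = signal.step(closed, N=500)
print(f"final value ~ {y[-1]:.3f} (want 1.0), peak ~ {y.max():.3f}")
```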
Even if Einstein were not willing to take Darby's entire airplane challenge, I would enjoy just humbling him a little with ONLY a single-axis control law like this. I would even be willing to GIVE HIM a Simulink model of an airplane, and then let him build a controller all on his own through trial and error... heck, I would even be willing to give him a PID control law structure, and leave ONLY the task of finding the right gains to him via trial and error. I can guarantee everyone that Einstein would not be able to find the correct gains through trial and error, even if I gave him MONTHS to do so.
This has been a little window into my world. Anyone (like Einstein) who claims that you don't need math for difficult problems like this, and that trial and error will get you where you want to go, is hopelessly uninformed and clueless. And I am willing to prove it to him.
RMT