To develop a controller for a system, you'll need a process to guide decisions about what is needed and to diagnose what might not be working. This process can be extended from your system modeling experience in ME352. For both system modeling and controller design, you need to develop an expectation of what will happen and then compare that to what does happen.
"Scoping" a mathematical model for a system means deciding what its inputs and outputs should be, and what degree of complexity the model should have. If it is a physics-based model, this complexity is often dictated by how many unique elements of the system you will model. If the model is purely mathematical, the complexity of the model may simply be dictated by the order and linearity of the differential equation(s) you choose to use to fit the model's behavior.
In this step, a model is actually constructed to predict the behavior of your system. For empirical, purely mathematical models, this can involve "fitting" an equation to data. For physics-based models, this can involve combining mathematical models of individual components to produce an equation or set of equations that represents the system's behavior based on known or measurable physical quantities.
However, to keep this model as simple as possible we could say "we assume the time it takes to transition between sleeping and awake is short enough that we aren't interested in the details of the transition". This means that we would only practically be interested in whether the brain was "sleeping" or "awake" as the transition would be so short it wouldn't be of interest.
This type of model is very common and allows you to represent different "situations" as Boolean States. A FINITE STATE MACHINE is a model that is always in a defined Boolean State and the time to transition between the states is neglected. While not everything can be accurately represented by a Finite State Machine model, they are very common in robotics and industrial control systems.
Each of the "Boolean States" in a finite state machine can take only one of two values:
If you use a finite state machine to model your system then two assumptions are required
Note that we are using the term "Boolean State" to distinguish from the "state" you're familiar with from system dynamics used in "state space" models.
A state transition diagram is a design tool for FSMs that allows you to organize how your system will move from one state to another. It consists of bubbles representing each Boolean state, and arrows that show how the system is permitted to move between those Boolean states. A generic example is shown below-- note that it is not possible for the system to move from state 3 to state 1!
In general, it takes some kind of "stimulus" to cause a state machine to leave one Boolean state and go to the other. In our brain model the stimulus that brings you from "Sleeping" to "Awake" might be an alarm going off for your 8AM class. The transition from "Awake" to "Asleep" might be described by the condition that "you're tired and it's past your bedtime."
There are two types of stimuli that we will deal with in ME480:
Inputs are often things like switches, buttons, and control panels on a robot or machine. "Timers" in an FSM program keep track of how long it's been since some event has occurred, and "Counters" keep track of how many times something has happened. You'll learn about those in upcoming notebooks.
State transition tables are like a "key" that is used to read a state transition diagram. Each row represents a transition on the state transition diagram.
Transition | Starting State | Condition | Ending State |
---|---|---|---|
A | Sleeping | Alarm rings | Awake |
B | Awake | Bedtime and I'm tired | Sleeping |
Once this table is constructed, you can describe each entire transition from one state to another by reading across the table. For example, for transition B, we have:
"Transition B occurs when the state is 'Awake' and 'Bedtime and I'm tired' occurs."
Note that state transition tables are typically written using mathematical shorthand rather than full sentences. We will learn how to do this in a subsequent notebook.
In a FSM, what is considered an "output" is up to the engineer. Usually, an output is something that the machine uses to interact with its environment or its user... something observable to the outside world. Examples could be lights that indicate machine operation, signals that power motors or pumps, or text displayed on a user interface. One way to think about states is that they are "unique collections of system outputs."
What would the outputs be in our brain model?
The tools you need to design the gate-intersection system are the same ones needed to control robots, AI characters in a video game, traffic lights, microwaves, digital watches, etc. Each of these machines and systems shares the need for a control system that makes a "decision" about how it will behave.
In this course, we will generally consider synchronous state machines, which basically means that their operation is governed by a clock signal executing one command after the other at a specified time. Generally, a synchronous state machine implemented in software consists of an "infinite" program loop that runs over and over. In each "loop" of the program, inputs are measured, logical conditions are evaluated to determine what state the machine should be in based on the user inputs and the machine's prior state. Then, at the end of each loop, outputs are written (think motors, lights, buzzers, etc.) based on what state the machine is in.
There are other types of state machines! For more information on types of state machine (this will not be on a test!) you can refer to this link or this one.
Because the state machines we will be implementing in this course are synchronous and running on an infinite loop, they need to make a decision about which state to be in every time through the loop. The decision has to be made regardless of whether anything of note has happened, and the software we write to implement a FSM must set each boolean state variable to either true or false on each pass through the loop.
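As a concrete illustration of this loop structure, here is a minimal Python sketch (Python is used here only for illustration; the function name run_fsm and the alarm/light variables are hypothetical). On every pass the input is read, the state decision is made, and an output is written, whether or not anything of note has happened:

```python
# A minimal sketch of the synchronous loop: every pass reads inputs,
# decides the state, and writes outputs, even when nothing has changed.
def run_fsm(n_passes, inputs):
    awake = False                  # initial Boolean state
    lights = []
    for k in range(n_passes):      # stands in for the "infinite" loop
        alarm = inputs[k]          # 1. measure inputs
        if alarm:                  # 2. evaluate state logic
            awake = True
        lights.append(awake)       # 3. write outputs (light tracks state)
    return lights

# The alarm rings on the second pass; the state latches "awake" after that.
lights = run_fsm(3, inputs=[False, True, False])
```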
It's possible to intuit the concept of a Boolean variable. Here are some example manifestations:
False | True |
---|---|
0 | 1 |
low | high |
de-energized | energized
open | closed |
All of these words are equivalent. In some programming languages (like Arduino), you can use "0," "false," or "LOW" interchangeably to represent "false," and "1," "true," or "HIGH" to represent "true." Many languages, however, are more restrictive. MATLAB (or its open-source equivalent, Octave), for instance, insists that Boolean variables be assigned a value of either "true"/"1" or "false"/"0."
The building blocks of "Boolean Expressions" are the basic logical operators "OR," "AND," and "NOT." A Boolean expression can be thought of as a statement of equivalence composed of Boolean (true/false) variables.
The "or" operator is often written as a "+" sign. Example: "C is true if A or B is true" can be written:
\begin{equation} C = A+B \end{equation}In ME480 our simulators and programming languages use "||" to represent "or" so the expression above would be coded as:
C = A||B;
The "and" operator is often written as a dot, $\cdot$, or omitted entirely. Example: "C is true if A and B are true" can be written:
\begin{equation} C=A\cdot B \end{equation}In some circles (including this class), the $\cdot$ AND symbol can be omitted, so the expression $C=A\cdot B$ is the same as the expression $C=AB$.
ME480 simulators and the programming languages used in the course use "&&" to represent "and" so the expression above would be coded as:
C = A&&B;
The "not" operator is often written as a bar over an existing variable, e.g. $\overline{A}$. For example, the statement: "C is true if A is not true" can be written:
\begin{equation} C=\overline{A} \end{equation}However, most programming languages do not allow you to place a bar over a variable to represent "not." In ME480, our simulators and hardware use languages that represent the "not" operator with a ! character, while MATLAB (and Octave) are a bit different: they represent "not" with either a tilde (~) or the function not().
ME480 simulators and Arduino use "!()" to represent "not," so defining the Boolean variable "C" to be true if "A" is false (and false if A is true) would be coded as:
C = !(A);
MATLAB and Octave use "~" to represent "not" so the expression above would be coded as:
C = ~A;
The table below shows a summary of characters used for Boolean operators in different programming languages. It also includes the symbols used in other disciplines like philosophy.
Note that for "NOT," we include a "." to show that we are negating something. The "." is not part of the symbol(s).
Language | OR | AND | NOT(.) |
---|---|---|---|
ME480 | $+$ | $\cdot$ | $\overline{.}$ |
MATLAB | $\vert$, $\vert\vert$ | $\&$, $\&\&$ | ~., not(.) |
Python | $\vert$, "or" | $\&$, "and" | "not" . |
C, C++, Java, Arduino | $\vert\vert$ | $\&\&$ | !. |
Philosophy, Logic | $\lor$ | $\land$ | $\neg .$ |
Formally, the order of operations for Boolean Algebra is:
Note: The "NOT" operator, denoted by the overline $\overline{.}$, has implied parentheses that span its length. Parentheses are used to group terms the same way as an overbar can be used, so parentheses and the "NOT" operator have equal precedence.
Much the same as in arithmetic and algebra, the "AND" ($\cdot$) operator takes precedence over the "OR" ($+$) operator, and parentheses can be used to group portions of an expression.
When we evaluate a Boolean expression, it's often necessary to look at all possible combinations of inputs (variables on the RHS of the expression) and evaluate the corresponding output (the assigned variable on the LHS of the equation) to understand the expression's implications. To do this, we use a tool called the truth table. For example, the truth table for the expression $C=A\cdot B$ is given below:
$$A$$ | $$B$$ | $$C=A\cdot B$$ |
---|---|---|
0 | 1 | 0 |
1 | 0 | 0 |
0 | 0 | 0 |
1 | 1 | 1 |
Note that I needed to exhaust all possible combinations of the inputs in order to construct the table. As the number of inputs grows, so does the number of combinations you must test! In fact, there are $2^n$ combinations, where $n$ is the number of inputs in the expression!
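The $2^n$ combinations can be enumerated programmatically rather than by hand. The following Python sketch (illustrative, not ME480-required code) uses itertools.product to build the truth table for $C=A\cdot B$:

```python
# Brute-force truth-table generation: itertools.product enumerates
# all 2**n combinations of n Boolean inputs.
from itertools import product

def truth_table(expr, n_inputs):
    """Return a list of (inputs..., output) rows for a Boolean expression."""
    return [row + (expr(*row),)
            for row in product([False, True], repeat=n_inputs)]

# C = A . B has four rows, only one of which outputs True
and_table = truth_table(lambda a, b: a and b, 2)
```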
Even with this complexity, truth tables are useful for testing more complex expressions "brute force." A good first example of this is the "XOR" or "exclusive OR" operator (shown as $\oplus$). This is like an "OR" operator but returns false if both A and B are true. By definition, the "XOR" operator is defined according to the following statement:
$$A\oplus B = \left(A+B \right)\cdot\overline{A\cdot B} $$The procedure for constructing a truth table is simple. Create a column for each variable in the Boolean expression, and then create columns for intermediate steps in computing the expression according to the order of operations. Make certain that you exhaust all possibilities for combinations of inputs, or else the truth table will not be "complete."
$A$ | $B$ | $A \oplus B$ | $A+B$ | $AB$ | $\overline{AB}$ | $\left(A\right. + \left.B \right) \cdot \overline{A\cdot B}$ |
---|---|---|---|---|---|---|
0 (false) | 0 (false) | 0 (false) | 0 (false) | 0 (false) | 1 (true) | 0 (false) |
1 (true) | 0 (false) | 1 (true) | 1 (true) | 0 (false) | 1 (true) | 1 (true) |
0 (false) | 1 (true) | 1 (true) | 1 (true) | 0 (false) | 1 (true) | 1 (true) |
1 (true) | 1 (true) | 0 (false) | 1 (true) | 1 (true) | 0 (false) | 0 (false) |
The intermediate columns in the table above make evaluating the expression easier by grouping terms.
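One way to check the XOR definition above without writing the table by hand is a brute-force sweep of all four input combinations. This Python sketch (for illustration) compares the definition against Python's built-in exclusive-or operator ^:

```python
# Verify that (A + B) . not(A . B) matches exclusive-or for all
# four combinations of two Boolean inputs.
from itertools import product

def xor_by_definition(a, b):
    # (A OR B) AND NOT(A AND B), per the definition in the text
    return (a or b) and not (a and b)

all_match = all(xor_by_definition(a, b) == (a ^ b)
                for a, b in product([False, True], repeat=2))
```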
It is possible to evaluate any Boolean expression using a truth table, but doing so is long and tedious! Using the few simplification rules presented below makes the job much easier.
The AND operator can be distributed into an OR expression (just like multiplication into addition!)
\begin{equation} A\cdot \left(B+C\right)=\left(A\cdot B\right)+\left(A\cdot C\right) \end{equation}DeMorgan's Laws, or the Laws of Negation, can be used to simplify long, complex Boolean expressions. It would be good practice to prove these using hand-written truth tables! Committing these to memory is actually quite useful, as they show up a lot in practical expressions common in logic-based programming and control (e.g. FSM design/implementation).
The first of DeMorgan's laws is one we like to call "neither!" It explains how to say that "neither A nor B can be true." \begin{equation} \overline{A+B}=\overline{A}\cdot \overline{B} \end{equation}
The second of DeMorgan's laws is one we like to call "Not this, or not that." It describes how to say that an expression is true if either A is false, or if B is false. Note that this is different than the case above, which says that neither A nor B can be true if the expression is to return true.
\begin{equation} \overline{A\cdot B} = \overline{A}+\overline{B} \end{equation}Making use of these rules can help you simplify complex logic. Consider the following Boolean algebra expression.
\begin{equation} Y=A\left(\overline{A+B}\cdot B\right)+B \end{equation}The truth table for this would be epic. However, if we make the simplification for $\overline{A+B}$, we see that we get:
\begin{equation} Y=A\left(\overline{A}\cdot\overline{B}\cdot B\right)+B \end{equation}$\overline{B}\cdot B$ can never return true (B cannot be both true and false at the same time), so the whole parenthetical expression disappears, leaving us with $Y=B$.
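This simplification can be double-checked by brute force. The Python sketch below (illustrative only) confirms that the original expression $Y=A\left(\overline{A+B}\cdot B\right)+B$ agrees with $Y=B$ for every input combination:

```python
# Brute-force check that Y = A(not(A+B) . B) + B simplifies to Y = B.
from itertools import product

def y_original(a, b):
    # A AND (NOT(A OR B) AND B), OR B
    return (a and (not (a or b)) and b) or b

simplification_holds = all(y_original(a, b) == b
                           for a, b in product([False, True], repeat=2))
```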
One possible way to organize a FSM implementation (and the way that is required in ME480) is by separating the code into four blocks, which are described below. Using this structure makes it very straightforward to turn your state transition diagram into a functioning FSM implementation.
Robots, factory assembly lines, CNC machines, handheld electronic games, HVAC systems, aircraft cockpits, and other automatic control systems whose operation can be supervised by a state machine usually have some way to interact with a human operator. Types of input devices are widely varied, but momentary switches and "latching" switches (toggle or locking switches like a light switch) are most common. In your laboratory assignment this week you have seen (or will see) that a switch can be used to generate a "high" voltage or "low" voltage. The outputs of these switches are the most basic type of Boolean input for a state machine design.
Block 1 is also where the program needs to take any "raw" inputs from the user and process/translate them for use in the state transition logic. For example, if your design requires you to hold a button for 2 seconds, you may need to write some logic that uses a "timer" (which we will cover later) to check how long a certain momentary button has been held.
This portion of the program is the most critical to implement correctly, but can be the easiest to implement if you have carefully considered the design of your state transition diagram.
To write the code for block 2, simply write a single line of code representing the Boolean expression for each transition in your state transition table, without including the ending state. This means that in block 2, you should have as many lines of code as there are transitions in your state transition table. Checking this count is a simple but extremely powerful way to make a quick self-assessment of the reasonableness of your block 2 code.
This section of code will finally "decide" which of our states will be true for this loop of the program. We look to the diagram to determine how to use the state transitions to set our current state. Specifically, we look at each state in our diagram and count the number of arrow heads pointing at it. Each arrow, as we know, represents one of the transition variables we defined in block 2. So all that is required now is to write the logic to say that a state is true if any of the state transitions ending in that state are true.
Functionally, this might manifest for a single state as:
State_x = T1||T2||T3;
where T1, T2, and T3 are all the state transition variables with an ending state of State_x. Let's write block 4 for our two-state program example.
This (final) part of your Boolean Algebra program is fairly self-explanatory. Now that I know which state my machine is in, I'll use that information to activate any outputs (lights, sounds, motors, pumps) that are associated with this state. Because I'm "finished" with all of the program's tasks, I'll store the current state of relevant variables in an "OLD" variable so I will be able to access it in the next loop of the program after inputs have been updated. In this case, we just need to store the "OLD" value of SW1 so that we can detect unique presses.
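Putting the four blocks together for the two-state Sleeping/Awake example, one pass of the loop might look like the following Python sketch. (The course uses Arduino-style code; all names here, including fsm_loop_pass, are illustrative, and a single switch SW1 is assumed to toggle between the two states.)

```python
# One pass through a four-block FSM loop for a two-state machine.
def fsm_loop_pass(state, sw1_old, sw1):
    sleeping, awake = state

    # Block 1: process raw inputs (a "unique press" is a rising edge)
    sw1_pressed = sw1 and not sw1_old

    # Block 2: one line per transition (starting state AND condition)
    t_a = sleeping and sw1_pressed   # A: Sleeping -> Awake
    t_b = awake and sw1_pressed      # B: Awake -> Sleeping

    # Block 3: a state is true if any transition ends there, or if it
    # was already true and no transition leaves it this pass
    new_awake = t_a or (awake and not t_b)
    new_sleeping = t_b or (sleeping and not t_a)

    # Block 4: write outputs and store "OLD" values for the next pass
    light_on = new_awake
    return (new_sleeping, new_awake), sw1, light_on

state, sw1_old = (True, False), False     # initialize: start in "Sleeping"
state, sw1_old, light = fsm_loop_pass(state, sw1_old, sw1=True)  # press SW1
```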
In a properly coded FSM, it is necessary to somehow initialize one of the design's states to true when the program first starts up. This is vital because all of the transitions in a state transition diagram depend on the context of the machine already being in a particular state.
In many applications where a FSM is a good choice for high-level program operation, you'll find yourself needing to design your state machine's transitions to use a few common stimuli. For instance: if you need to create a system that "waits" for a specified period of time after a button is pressed, or requires that a button be held for a particular length of time to "make sure" that the user meant to enter a particular machine state, you'll likely need to implement (or use) some kind of timer so that the stimulus of "wait for a specified amount of time" can be implemented in your state transition diagram and chart.
Timers are extremely common in finite state machine design. They behave in the following way:
A rising edge counter helps a finite state machine keep track of how many times something has happened. This, like a timer, is a common need for designing the stimuli for transitions in FSMs. A standard counter that you might find on an industrial controller has three inputs: an "up input," a "down input," and a "reset input."
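A counter with these three inputs might be sketched as follows in Python (a hypothetical class; a real industrial counter would be configured on the controller itself). It counts rising edges, so holding an input down does not count repeatedly:

```python
# Sketch of an industrial-style counter with "up," "down," and "reset"
# inputs, counting rising edges of the up/down signals.
class Counter:
    def __init__(self):
        self.count = 0
        self._up_old = False       # previous-pass values for edge detection
        self._down_old = False

    def update(self, up, down, reset):
        if reset:
            self.count = 0
        else:
            if up and not self._up_old:       # rising edge on "up"
                self.count += 1
            if down and not self._down_old:   # rising edge on "down"
                self.count -= 1
        self._up_old, self._down_old = up, down
        return self.count

c = Counter()
c.update(True, False, False)    # first press counts
c.update(True, False, False)    # button still held: no change
c.update(False, False, False)   # released
c.update(True, False, False)    # second press counts
```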
There are many strategies for developing equations of motion that describe the dynamics (motion) of a system. You've been exposed to several in your career as an engineering student. In this course, the dynamic models you will see are differential equations that relate a model's inputs to its outputs. In this course, we will use one of two general approaches to develop a system's governing differential equation:
Both of these approaches can be valid. Often, some combination of both strategies is necessary before a model is complete. Today, we will focus on the first steps toward building a model using a data-driven approach.
One of the most common ways to investigate a system's behavior when beginning to develop a model for its dynamics is to perform a test that involves changing the system's inputs and watching how its outputs evolve over time. By observing the relationships between the system's inputs and its outputs, we can often make determinations about what sort of mathematical model might be required to describe the system's behavior.
A "step input" is a common type of input used to investigate a system's dynamics. Colloquially, providing a system with a "step input" means that the system begins at an equilibrium defined by a steady-state input and a steady-state output. Then, the input is changed suddenly.
The unit step function (sometimes called the Heaviside step function after Oliver Heaviside) is defined mathematically as:
\begin{equation} u_s(t-t_0)=\left\{ \begin{matrix} 1 & \forall t\geq t_0 \\ 0&\forall t<t_0 \\ \end{matrix} \right. \end{equation}Applying a step change in a system's input is often represented mathematically by applying a scaled version of the Heaviside unit step function, which by definition has a magnitude of 1. Generally speaking, a step function of magnitude $U$ that occurs at time $t_0$ can be written:
$$\Delta u(t) = U\cdot u_s(t-t_0)$$True Heaviside step functions represent instantaneous change in a system's input. Instantaneous changes are impossible in a physical system because they would require infinite power to achieve, but step functions are often a good approximation for "sudden" changes in a system's inputs. Step response tests are tests in which an approximate step change in a system's input is applied while the system's output(s) is/are measured.
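The scaled step definition above translates directly into code. A Python sketch (the function names are illustrative):

```python
# The Heaviside unit step, and a scaled step of magnitude U at time t0,
# following the definitions in the text.
def unit_step(t, t0=0.0):
    return 1.0 if t >= t0 else 0.0

def step_input(t, U, t0=0.0):
    return U * unit_step(t, t0)

# A 5-unit step applied at t0 = 2 s, sampled at a few times
u = [step_input(t, U=5.0, t0=2.0) for t in [0.0, 1.9, 2.0, 3.0]]
```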
There are several mathematical definitions of "Stability" for a dynamic system. In this course, when we say that a system is stable, we generally mean that it is "Bounded Input Bounded Output (BIBO)" stable. What does this mean? It means that for any finite input, the system will produce a finite output.
Technically, "proving" that a system is globally stable in this way using experimental data would require giving the system every possible input and watching to make sure that its output stays finite. However, most systems are only subjected to a relatively small range of inputs. Consider your zumo or your lab rig's motor: you can only provide voltage inputs between 0 and 5V to the lab rig, so why worry about how the lab rig acts when it is fed 4,000 volts?
This is the concept of local stability. Without mathematical tools (which we will get to shortly), about all we can say is that if a system is given step inputs with magnitudes within the range of interest and reaches a steady state, it is likely stable. The figure below shows some examples of systems responding to step inputs:
For the purposes of this course, we define "steady state" behavior of a system in response to inputs that are nonperiodic (such as a step input) to occur when the output of the system is no longer changing. This definition changes slightly for periodic inputs (like sine waves).
If a system reaches a steady state (if it is BIBO stable), a dynamic test can also give us information about the steady-state ratio of output to input for the system. Think of this like a "calibration:" it tells us "how much output" we get "per unit input" for the system. Mathematically, for a step response test, we can define the steady-state gain as the ratio of the change in output to the change in input:
$$K_{ss} = \frac{y_{ss}-y_0}{u_{ss}-u_0} = \frac{\Delta y_{ss}}{\Delta u_{ss}}$$For other types of inputs, such as sinusoids, ramps, or impulse functions, the definition is slightly different, but the conceptual definition of steady state gain remains the same. It always tells us how much a system amplifies or attenuates its input.
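Computing $K_{ss}$ from step-test data is a one-line calculation. In this Python sketch the numbers are made up for illustration:

```python
# Steady-state gain from a step test: change in steady-state output
# divided by change in input.
def steady_state_gain(y0, y_ss, u0, u_ss):
    return (y_ss - y0) / (u_ss - u0)

# e.g. a motor at 50 rad/s settles at 150 rad/s after a 1 V -> 4 V step
K_ss = steady_state_gain(y0=50.0, y_ss=150.0, u0=1.0, u_ss=4.0)
```

Here $K_{ss} = 100/3 \approx 33.3$ rad/s per volt for the (hypothetical) motor.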
A system is linear if it satisfies the principle of superposition:
If an input change $\Delta u_1(t)$ produces an output change $\Delta y_1(t)$ and an input change $\Delta u_2(t)$ produces an output change $\Delta y_2(t)$, then
input change $\Delta u(t) = c_1\Delta u_1(t) + c_2 \Delta u_2(t)$ produces an output change $\Delta y(t) = c_1\Delta y_1(t)+c_2\Delta y_2(t)$ for all pairs of input changes $\Delta u_1(t),\Delta u_2(t)$ and constants $c_1,c_2$.
One key consequence of the principle stated above is that if one doubles the input change $\Delta u(t)$ for a system, a linear system will produce exactly double the output change $\Delta y(t)$ at every time $t$.
Unfortunately, essentially all physical systems are nonlinear and of effectively infinite order. However, given that we're often only concerned with the behavior of a system within certain small regions of inputs and initial conditions, many physical systems are approximately linear in a particular region of interest to an engineer.
Settling time gives us an idea of how long the transients (changing behavior) of a dynamic system last. We use settling time to classify a system's dynamics as "fast" or "slow" relative to our control system's design goals, or when comparing one configuration of a system to another.
We will link settling time to mathematical models and their properties, but it is an empirical concept that can be obtained simply by analyzing a dataset. The 2% settling time is defined as the last time at which the system's output change $\Delta y(t) = y(t) - y_0$ has an absolute value that is greater than 2% away from $\Delta y_{ss} = y_{ss}-y_0$. This is illustrated graphically below for two systems, one of which is oscillatory, and another which reaches its steady state value asymptotically.
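The 2% settling time definition above can be applied directly to sampled data. This Python sketch (the function name is hypothetical) finds the last sample outside the 2% band, using a first-order exponential response as made-up test data:

```python
# 2% settling time from sampled data: the last time at which
# |dy(t) - dy_ss| exceeds 2% of |dy_ss|, per the definition in the text.
import math

def settling_time_2pct(t, y, y0, y_ss):
    dy_ss = y_ss - y0
    band = 0.02 * abs(dy_ss)
    t_s = t[0]
    for ti, yi in zip(t, y):
        if abs((yi - y0) - dy_ss) > band:
            t_s = ti          # still outside the band at this sample
    return t_s

# Test data: y(t) = 1 - e^{-t}, which enters the 2% band at t = ln(50) ~ 3.91
t = [0.1 * k for k in range(100)]
y = [1.0 - math.exp(-ti) for ti in t]
t_s = settling_time_2pct(t, y, y0=0.0, y_ss=1.0)
```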
Dynamic systems that store and dissipate energy are often well-approximated using differential equations. Sometimes, we get lucky and can use a linear differential equation to adequately model our system's dynamics. The solutions to linear differential equations satisfy the principle of superposition, and allow us to use a rich set of tools to understand our system's behavior.
When a system cannot adequately be modeled using a linear differential equation, it can often be linearized and treated like a linear system for a limited range of inputs. If it cannot be linearized, numerical integration is still possible as a way to understand how the system will behave.
The following sections focus on the development of linear models for system behavior using a data-driven approach. We will pause at times to connect these models to systems' physics, but we will leave a deep dive into physics-based modeling for a later notebook.
In general, systems we encounter "in the wild" would require an infinite-order model to capture all of the system's behavior exactly. Moreover, nearly all real systems are nonlinear. However, many times a system can be approximated by a lower-order, linear differential equation. When we say "order," we mean: how many derivatives appear in the model's governing differential equation? In choosing an order for our model, an engineer's job is to count the number of significant, independent energy-storing elements in the system. That number gives us the minimum model order we can use to represent the system's dynamics.
When we look at a dataset from an experiment performed on a system, if the data "looks first order," it is often a clue that the system only has one significant independent energy storing element. If the data "looks second order," we might suspect that two of its energy storing elements are independent and significant. If the data displays a combination of first and second order behavior, it may be time to consider that the model we construct should be greater than second order. Developing an expectation for the model's order is an important step in scoping and constructing an appropriate model for a dynamic system.
The first way most students learn to solve linear, constant-coefficient differential equations is called the "method of undetermined coefficients." This is the first method we will use here, and its steps are as follows.
Given a linear differential equation of order $N$ with constant coefficients in the form:
$$a_N \frac{d^N y}{dt^N} + a_{N-1} \frac{d^{N-1} y}{dt^{N-1}}+ a_{N-2} \frac{d^{N-2} y}{dt^{N-2}} + \cdots + a_1 \frac{dy}{dt} + a_0 y = u(t)$$This can be written compactly as:
$$\sum_{n=0}^{n=N} a_n \frac{d^n y}{dt^n} = u(t)$$We can say that solutions to this differential equation will satisfy the principle of superposition because the differential equation itself is linear. Therefore, we can separate the problem into finding a "homogeneous solution" where $u(t)=0$, and then adding to this a particular solution in which $u(t)$ is equal to a known function.
Because exponential functions (including complex exponential functions) are the only known functions for which their derivatives are scaled versions of themselves, we know our solution $y(t)$ must take the form of an exponential function. Therefore, we can write the "characteristic equation" for our differential equation by substituting $y(t) = e^{pt}$ into the equation's homogeneous form, where $p$ is an unknown scale factor in the exponential. When we do this, each term $a_n \frac{d^n y}{dt^n}$ becomes $a_n p^n e^{pt}$. By canceling the common exponential term $e^{pt}$ from all terms in the equation, we are left with a polynomial (algebraic) function: $$\require{cancel}$$ $$ a_N p^N \cancel{e^{pt}} + a_{N-1}p^{N-1} \cancel{e^{pt}} + a_{N-2}p^{N-2} \cancel{e^{pt}} + \cdots + a_1 p \cancel{e^{pt}} + a_0 \cancel{e^{pt}} = 0$$ This can be written in compact form as: $$\sum_{n=0}^{n=N} a_n p^n = 0$$
The $N$ solutions to this polynomial equation are called the "characteristic roots" or "eigenvalues" of the system. Using the eigenvalues, the known input function $u(t)$ and $N$ known initial conditions, we can solve the differential equation using the following steps:
Because solutions to linear, constant-coefficient differential equations include exponential terms of the form $e^{pt}$, where an eigenvalue $p$ can be either a real number or a member of a complex conjugate pair, we can say that a differential equation model for a system is stable (reaches a steady-state value) if and only if the real parts of all eigenvalues $p$ are strictly less than zero.
The reason for this is intuitive-- consider that if $\alpha$ is some infinitesimal positive number, $e^{\alpha t}$ will grow without bound as $t\rightarrow \infty$, which is not a "bounded output." Conversely, $e^{-\alpha t}$ will decay to zero, meaning that the model's output $y$ will stop changing as $t\rightarrow \infty$.
Scoping a differential equation model for use to describe a collected dataset means:
Once the model is scoped, you are ready to construct it by comparing the behavior of your real system with a differential equation model.
To construct a model from collected data, you will need to fit a differential equation to your collected data by characterizing the data in terms of the differential equation model you chose. Once you do this, you can use the characteristics of the collected response to find unknown parameters in your differential equation model.
Characterizing (or fully describing) a response in terms of a particular differential equation model requires:
Once these two things are known, you can equate terms in order to build a complete numerical differential equation that fits your collected data.
A generic first-order linear differential equation with constant coefficients has the form:
\begin{equation} \sum_{n=0}^{n=1} a_n \frac{d^n y}{dt^n}=a_1\dot{y} + a_0y= u(t) \end{equation}Because this equation is linear and satisfies the principle of superposition, the solution $y(t)$ to this differential equation can be separated into two parts:
$$y(t) = y_h(t) + y_p(t)$$Where $y_h(t)$, or the "homogeneous solution" is the response of the system when $u(t)=0$ and $y_p(t)$, the "particular solution," is the response of the system to some specific input $u(t)$.
As with all linear differential equations, the stability of a first order system is governed by its characteristic equation. Using the method of constructing the characteristic equation explained above, we find:
\begin{equation} a_1 p + a_0 = 0 \end{equation}Solving this algebraic equation gives us the system's eigenvalue. In German, "eigen" means "own," and this value, which you will recognize as the multiplier in the exponent of the homogeneous solution, is called the system's "own" value because it is a feature of the system's solution that never changes as long as the differential equation remains the same in homogeneous form.
For a first-order system, the eigenvalue or characteristic root is:
\begin{equation} p = -\frac{a_0}{a_1} \end{equation}The system is stable if $p<0$.
The complete response of a first-order linear differential equation to a step input of magnitude $U$ is: \begin{equation} y(t) = \frac{U}{a_0}(1-e^{-\frac{a_0}{a_1}t})+ y_0e^{-\frac{a_0}{a_1}t} \end{equation}
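As a quick numerical check, the closed-form step response above can be evaluated directly. The parameter values below ($a_1$, $a_0$, $U$, $y_0$) are hypothetical, chosen only for illustration:

```python
import numpy as np

# First-order step response, evaluated from the closed-form solution above.
# a1, a0, U, and y0 are hypothetical values chosen for illustration.
a1, a0 = 2.0, 4.0          # coefficients in a1*ydot + a0*y = u(t)
U, y0 = 8.0, 1.0           # step magnitude and initial condition
tau = a1 / a0              # time constant (0.5 s here)

t = np.linspace(0.0, 10 * tau, 1000)
y = (U / a0) * (1 - np.exp(-(a0 / a1) * t)) + y0 * np.exp(-(a0 / a1) * t)

print(y[0])    # starts at y0 = 1.0
print(y[-1])   # settles near U/a0 = 2.0
```

Note that the response starts at the initial condition $y_0$ and settles toward $U/a_0$, as the formula predicts.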
NOTE: We could have found the same result by looking at the zero-initial-condition step response and the initial-condition free response, and simply adding the solutions together! This is another important result of the principle of superposition.
For a first order system, the system's single eigenvalue $p=-\frac{a_0}{a_1}$. If we define the time constant to be $\tau = \frac{a_1}{a_0} = -\frac{1}{p}$, then we can use it to characterize the shape of a first order system.
When $t = \tau$, the differential equation's homogeneous solution is \begin{equation} y_h(t) = y_0e^{-\frac{t}{\tau}} = y_0e^{-1} = 0.367879y_0 \end{equation}
Because $y_h(\tau) = 0.368 y_0$, approximately 63.2% of the change from $t_0$ to $t_{ss}$ has occurred when $t=\tau$. For step responses, this means that the time constant can be pulled off of a plot by finding the time $\tau$ at which $y(t) = y_0 + 0.632(y_{ss}-y_0)$.
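The 63.2% rule can be applied to data directly. The sketch below generates a synthetic "measured" step response with an assumed true time constant of 0.5 s, then recovers $\tau$ as the time at which 63.2% of the total change has occurred:

```python
import numpy as np

# Synthetic "measured" step response; tau_true is an assumed value
tau_true, y0, yss = 0.5, 0.0, 3.0
t = np.linspace(0.0, 5.0, 5001)
y = yss + (y0 - yss) * np.exp(-t / tau_true)

# tau is the time at which 63.2% of the total change has occurred
target = y0 + 0.632 * (yss - y0)
tau_est = t[np.argmax(y >= target)]
print(tau_est)   # close to 0.5
```

With real (noisy) data, the same idea applies, though you may want to smooth the data or average several crossings first.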
A linear, constant-coefficient second-order differential equation has the form:
$$\sum_{n=0}^{n=2} a_n \frac{d^n y}{dt^n}=a_2 \ddot{y} + a_1\dot{y} + a_0 y = u(t)$$As with a first order system, the characteristic equation for a second order system is obtained by substituting $\frac{d^n y}{dt^n}$ with the dummy variable $p^n$ into the equation's homogeneous form with $u(t)=0$. For a second order system, this yields:
$$a_2 p^2 + a_1 p + a_0 = 0$$This equation can be solved using the quadratic formula to obtain the system's eigenvalues, also called the system's poles or characteristic roots. The quadratic formula will always yield two solutions to the characteristic equation for a second-order system.
$$p_1,p_2 = \frac{-a_1 \pm \sqrt{a_1^2 - 4 a_2 a_0} }{2a_2}$$This equation leaves three possibilities:
In all cases, stability of the system depends on the real parts of $p_1,p_2$ both being strictly negative. For an unstable system, the second-order equation will result in an infinite value at steady state. For a stable system, transient behavior depends on whether the eigenvalues are real and repeated, real and distinct, or complex conjugates.
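These cases can be checked numerically by solving the characteristic equation for its roots. The sketch below uses the quadratic's roots to classify a second-order equation; the coefficient values in the examples are hypothetical:

```python
import numpy as np

def classify(a2, a1, a0):
    """Classify a2*y'' + a1*y' + a0*y = u(t) by its characteristic roots."""
    p = np.roots([a2, a1, a0])        # solves a2*p^2 + a1*p + a0 = 0
    stable = bool(np.all(np.real(p) < 0))
    if np.iscomplex(p).any():
        kind = "underdamped (complex conjugate roots)"
    elif np.isclose(p[0], p[1]):
        kind = "critically damped (real, repeated roots)"
    else:
        kind = "overdamped (real, distinct roots)"
    return stable, kind

print(classify(1.0, 2.0, 10.0))   # underdamped, stable
print(classify(1.0, 5.0, 4.0))    # overdamped, stable
```

Stability is judged from the real parts of the roots, exactly as stated above.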
Where $C_5$ and $C_6$ can be found using the initial conditions and the eigenvalue $\sigma$:
$$y_h(t) = e^{\sigma t}(y_0 + t(\dot{y}_0 - \sigma y_0))$$Finding step responses of second order linear differential equations that have nonzero initial conditions can be achieved either by following the method of undetermined coefficients as above without the simplification that $y(0) = \dot{y}(0) = 0$, or by using the principle of superposition. By the principle of superposition, one could add the "free response" for a second order system to the "zero initial condition" step response to obtain the total response of the system.
Knowing the form of a second-order system's free and step responses is helpful for recognizing "approximately second order" behavior in a real system and deciding that a second order model scope may be appropriate. But in looking at data we think might be second order, we can also get a lot of information about what our mathematical model might look like by relating generic concepts like stability, settling time, and steady-state gain to mathematical features of a second order model. In addition to these general concepts, second-order systems with complex conjugate eigenvalues oscillate and decay if they are stable. Looking at how these systems oscillate and decay can help us infer physical insights from data that look second order.
Consider a second-order linear differential equation in "standard form": $$a_2 \ddot{y} + a_1\dot{y} + a_0 y = u(t)$$
Its characteristic equation is:
$$a_2 p^2 + a_1 p + a_0 = 0$$After solving for the system's characteristic roots, we will know whether it is:
Each of these types of systems lends itself to a different "standard form" for the characteristic equation.
For the system to be stable, the real parts of both of its eigenvalues must be strictly less than 0.
For an underdamped system, the standard form of the characteristic equation is:
$$p^2 + 2\zeta \omega_n p + \omega_n^2 = 0$$By equating coefficients, we can see that $2\zeta \omega_n = \frac{a_1}{a_2}$ in our original differential equation, and $\omega_n^2 = \frac{a_0}{a_2}$.
For a critically damped system, the characteristic equation is often factored:
$$\left(p+\frac{1}{\tau}\right)^2 = 0$$Where $\tau$ is the "effective time constant" of the system. By equating coefficients, we can see that $\frac{2}{\tau} = \frac{a_1}{a_2}$ in our original equation, and $\frac{1}{\tau^2} = \frac{a_0}{a_2}$ in our original equation.
For an overdamped system, the characteristic equation is often factored:
$$\left(p+\frac{1}{\tau_1}\right) \left(p+\frac{1}{\tau_2}\right) = 0$$The "effective time constants" $\tau_1,\tau_2$ can be found by equating coefficients with our original equation as $\frac{1}{\tau_1} + \frac{1}{\tau_2} = \frac{a_1}{a_2}$ and $\frac{1}{\tau_1 \tau_2} = \frac{a_0}{a_2}$.
Unlike first-order systems, we cannot, in general, say that the response of an overdamped or critically damped second-order system is 63.2% finished with its transient behavior at one time constant $\tau$, but for overdamped systems, this approximation becomes better and better as the eigenvalues $p_1$ and $p_2$ become more and more different, with one decaying "much more quickly" than the other.
Looking at the standard form of the characteristic equation for an underdamped second-order equation:
$$p^2 + 2\zeta \omega_n p + \omega_n^2 = 0$$We can see that the quadratic formula yields solutions $p_1,p_2$ of:
$$p_1,p_2 = - \zeta \omega_n \pm \omega_d j $$Where $\omega_d = \omega_n \sqrt{1-\zeta^2}$ is called the "damped natural frequency" of the system. $\omega_d$ is the same term we used in the section above on homogeneous and step responses for underdamped systems, and it represents the imaginary part of the eigenvalue. It tells us the frequency in radians per second at which the equation oscillates. The "natural frequency" $\omega_n$ tells us how fast the equation would oscillate in the absence of damping, or with a damping ratio of $\zeta = 0$. Note that when $\zeta=0$, the real part of the complex conjugate pair disappears, and the system oscillates at $\omega_d = \omega_n$.
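The coefficient-matching relations above can be checked numerically. The sketch below uses hypothetical coefficients $a_2, a_1, a_0$ to compute $\omega_n$, $\zeta$, and $\omega_d$, then confirms they agree with the roots of the characteristic equation:

```python
import numpy as np

# Assumed underdamped example: a2*y'' + a1*y' + a0*y = u(t)
a2, a1, a0 = 1.0, 2.0, 25.0
wn = np.sqrt(a0 / a2)            # natural frequency, from wn^2 = a0/a2
zeta = a1 / (2.0 * a2 * wn)      # damping ratio, from 2*zeta*wn = a1/a2
wd = wn * np.sqrt(1 - zeta**2)   # damped natural frequency

p = np.roots([a2, a1, a0])       # eigenvalues: -zeta*wn +/- wd*j
print(wn, zeta, wd)
```

For these values the roots come out to $-1 \pm 4.899j$, matching $-\zeta\omega_n \pm \omega_d j$.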
To find $\omega_d$ and $\zeta$ from a plot, which will allow you to find both of the system's eigenvalues, first note that the damped natural frequency can be found using the time $T$, or period, between peaks in an oscillatory response:
$$\omega_d = \frac{2\pi}{T}$$The system's damping ratio $\zeta$ can be found using the log decrement formula.
\begin{equation} \zeta = \frac{\frac{1}{n-1}\left(\ln\frac{y_1}{y_n}\right)}{\sqrt{4\pi^2+\left(\frac{1}{n-1}\left(\ln\frac{y_1}{y_n}\right)\right)^2}} \end{equation}Where the definitions of $y_1,y_2,\ldots,y_n$ are given by the following figure:
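As a sanity check on the log decrement formula, the sketch below implements it and verifies that it recovers a known damping ratio: for an underdamped response, successive peaks shrink by a factor of $e^{2\pi\zeta/\sqrt{1-\zeta^2}}$ per period, so feeding that ratio back in should return $\zeta$. The value $\zeta = 0.1$ is an arbitrary test case:

```python
import numpy as np

def log_decrement_zeta(y1, yn, n):
    """Damping ratio from the 1st and nth successive peak amplitudes."""
    d = np.log(y1 / yn) / (n - 1)
    return d / np.sqrt(4 * np.pi**2 + d**2)

# Successive peaks of an underdamped response shrink by
# exp(2*pi*zeta/sqrt(1 - zeta^2)) per period; use that to test the formula.
zeta_true = 0.1
ratio = np.exp(2 * np.pi * zeta_true / np.sqrt(1 - zeta_true**2))
print(log_decrement_zeta(ratio, 1.0, 2))   # recovers 0.1
```

In practice, using peaks separated by several periods ($n > 2$) averages out measurement noise.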
For lightly damped underdamped systems (those with small damping ratio $\zeta$), the settling time is approximated similarly to the method used for first-order systems. Because the eigenvalues' real parts are located at $Re(p) = -\zeta \omega_n = \sigma$, the step response of a second order system is bounded by an exponential function $e^{\sigma t} = e^{-\frac{t}{\tau_{eff}}}$, where $\zeta \omega_n = \frac{1}{\tau_{eff}}$ can be thought of as an "effective time constant" for the system. This means that the 2% settling time can be approximated as:
$$t_{s,2\%} \approx \frac{4}{\zeta \omega_n} = 4 \tau_{eff}$$For overdamped systems with large separations between $p_1$ and $p_2$, the slower (larger) effective time constant can be used to compute settling time. For critically-damped or nearly critically-damped systems, the settling time can be computed analytically by finding the point at which the solution never leaves the 2% bounds around its total change $\Delta y_{ss} = y_{ss}- y_0$.
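The settling-time estimate is a one-line calculation once $\zeta$ and $\omega_n$ are known. The values below are hypothetical:

```python
# 2% settling-time estimate for an assumed underdamped system
zeta, wn = 0.2, 5.0            # hypothetical damping ratio and natural frequency
tau_eff = 1.0 / (zeta * wn)    # effective time constant, 1/(zeta*wn)
ts_2pct = 4.0 * tau_eff        # t_s ~ 4*tau_eff
print(ts_2pct)   # 4.0 s for these values
```

Remember this is an approximation based on the exponential envelope; for heavily damped or nearly critically damped systems, check the bound directly as described above.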
Still the ratio of the change in output over the change in input, the steady state gain for any standard-form second-order system in response to a step input can be computed as:
$$K_{ss} = \frac{y_{ss} - y_0}{U_{ss} - U_0} = \frac{\Delta y_{ss}}{\Delta u_{ss}} = \frac{1}{a_0}$$First and second order systems are nice when they describe our real system's behavior. They are also nice because finding the response of a higher-order (greater than 2nd order) system can be accomplished using the method of undetermined coefficients-- higher-order systems can only display combinations of first and second-order behavior, because the characteristic equation can only have either real or complex conjugate eigenvalues!
This doesn't change the fact that using the method of undetermined coefficients to solve 3rd and higher order systems can be tedious. We will develop more sophisticated, more efficient tools to deal with higher-order systems as we need them.
A dynamic physical system is a collection of interconnected, physical components. At least one of these components must store energy for the system to display dynamic (time-varying) behavior in the absence of a time-varying input.
Lumped parameter modeling is the act of approximating a real, physical system comprised of interconnected real, physical components, each of which may have spatially or temporally varied physical properties as a system of interconnected idealized components that each have one "lumped," spatially-and-time-invariant physical property.
This is the approach to physics-based modeling we will use in ME480.
When we go to construct dynamic, physics-based models for lumped-parameter, idealized elements (which are themselves one-element systems), we can start building our model by thinking about how the element stores and/or dissipates energy when work is applied to it.
You have seen the law of conservation of energy for a system, also called "the first law of thermodynamics," in your thermodynamics course. It states that a change in a system's internal or stored energy from energy state "1" to energy state "2" $E_{1\rightarrow 2}$ must be caused by either heat transfer $Q_{1\rightarrow 2}$ into or out of the system boundaries, or by work $W_{1\rightarrow 2}$ done to or by the system.
$$E_{1\rightarrow 2} = Q_{1\rightarrow 2} - W_{1\rightarrow 2}$$For this discussion, we will use the "Mechanical Engineering" convention for signs, in which heat transfer into the system is defined positive and work done by the system is defined positive. Note that this means that if a system is doing "positive work" on its surroundings, its stored energy will decrease in the absence of heat transfer. If positive heat transfer occurs at the system boundaries, the system's stored energy will increase. Other sign conventions are possible, and they vary from field to field.
The form of stored energy $E$ and the work done to or by the system $W$ will vary based on whether we are discussing a mechanical, electrical, fluid, thermal, or mixed system or element, but as you are probably aware, the units of all of these types of energy are dimensionally consistent with $\frac{kgm^2}{s^2}$ or "Joules."
When we approximate the physical construction of a system using idealized, lumped-parameter elements, we often say that these elements fall into one of the following categories:
All of the commonly used lumped parameter elements we will discuss fall into one of these three categories. If a system or component does more than one of the three things mentioned above, it can often be split up into idealized elements that only perform one job.
Because we are working towards building physics-based dynamic models of a system, and are interested in how our system's behavior evolves over time rather than just between two energy states "1" and "2," we can shrink the duration over which our first law equation is applied, and look at the rate of energy change in smaller and smaller time intervals. In the limit of $\Delta t \rightarrow 0$, we end up taking the derivative of the first law of thermodynamics. Because the derivative of energy is power, with SI units of Joules/second, we call this the power form of the first law, and it can be applied to a system at any arbitrary moment in time:
$$\dot{E} = \dot{Q} - \dot{W}$$Power comes in many forms-- in this course, we will focus mainly on electrical, mechanical, and incompressible fluid power (flow work), which are the products of two key variables in each case:
In each of the cases above, the variables in the power equation can be grouped into two categories based on how they are measured: "across" and "through."
It is sometimes safe to assume that the input to a dynamic, physical system is "ideal" in an energetic sense. Idealized sources provide power to the system, and can change its energetic state.
Idealized sources can provide a known input to a system, which is usually a power variable, regardless of how much power is required to maintain that known input. For example, if the input to a system is force, then we might say that an "idealized force source" is able to provide as much power as is required to maintain a known input force. These infinite power sources are not real, but many times are a good approximation for systems that operate in a relatively small range of energetic states.
Because power is the product of a T-type and an A-type variable, idealized sources are often classified as either "T-type" sources, which provide a known T-type input regardless of what A-type variable is required, or "A-type" sources, which provide a known A-type variable regardless of what T-type variable is required.
Note that in the figure, a line representing the possible behavior of a "real" power source, which only has a finite available power, is also included. Many real power sources can be approximated as ideal T-type or A-type sources if the power required by the system is low relative to the power source's capability.
Common idealized T-type sources include:
Common idealized A-type sources include:
Idealized energy-storage elements are a useful approximation of many real objects that store energy. Stored energy comes in many forms: mechanical kinetic and potential energy, chemical energy, and thermal energy are all examples. If an object primarily stores only one type of energy, it may be a good candidate to treat as an idealized energy storage element.
Idealized energy storage elements (except those that store thermal energy explicitly) are assumed to have no heat transfer in or out. The net work done on or by the element must balance with its stored energy, which can be written formally using the first law of thermodynamics by ignoring heat transfer.
$$\dot{E} = \cancel{\dot{Q}} - \dot{W}$$Further, the lumped-parameter, idealized energy storage elements used to construct dynamic physical models are assumed to store only one type of energy, and to exchange energy with their surroundings using only one type of work.
For each particular type of energy storage element, these assumptions have different consequences. For fluid capacitors, any kinetic energy due to fluid entering or leaving the capacitor is ignored. For springs, the mass of the spring is ignored. For idealized rotational inertias, any material elasticity that could store potential energy is ignored. The list goes on, but the key thing to remember is that while no real object is actually an idealized, lumped-parameter energy storage element, many real objects are well-approximated by these assumptions because one form of energy storage vastly dominates any other relevant terms in the first law equation above.
Energy storage in fluid, mechanical, and electrical systems is usually accomplished by accumulating either the T-type or the A-type variable in the power equation (not both). In mechanical systems, a spring stores potential energy as $E = \frac{1}{2}K x_{12}^2$, where $x_{12}$ is the spring deflection. Substituting Hooke's law into this equation yields $E = \frac{1}{2K} F ^2$, where $F$ is the force in the spring. Because the energy equation can be written in terms of the T-type variable $F$, the spring is considered a T-Type energy storage element. Conversely, an electrical capacitor, which stores energy as $E = \frac{1}{2} C V_{12}^2$, is considered an A-type energy storage element because voltage is an electrical system's across-type power variable.
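The equivalence between the deflection-based and force-based forms of the spring energy follows directly from Hooke's law, and can be confirmed numerically. The stiffness and deflection values below are hypothetical:

```python
# Spring energy computed two ways: from deflection, and (via Hooke's law
# F = K*x) from force. K and x are hypothetical values.
K, x = 200.0, 0.05             # stiffness [N/m], deflection [m]
F = K * x                      # spring force [N]
E_from_x = 0.5 * K * x**2      # E = (1/2) K x^2
E_from_F = F**2 / (2 * K)      # E = F^2 / (2K)
print(E_from_x, E_from_F)      # both 0.25 J
```

Being able to write the stored energy in terms of the T-type variable $F$ is exactly what makes the spring a T-type storage element.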
Classifying energy storage elements this way allows us to draw analogies between different system types. It allows us to treat mechanical springs similarly to fluid inertors, capacitors similarly to masses, and so on.
An idealized energy storage element's "elemental equation" is a restatement of the first law of thermodynamics in power form. An elemental equation uses an empirical relationship, e.g. Hooke's Law in the case of a mechanical spring, and combines that relationship with the first law to tell us how the element's energy is accumulated in terms of the element's lumped parameter, or dominant physical characteristic.
A list of the elemental equations for the lumped-parameter energy storage elements we may encounter in ME480 is shown below, along with the equation for the element's stored energy.
Idealized energy-dissipation or "dissipative" elements are a useful approximation of many real objects that do not store significant energy, but for which the net work done on or by the object is not zero. In order to satisfy the conservation of energy, this type of idealized element must transfer energy as heat to the environment, which is why they are called dissipative elements. They result in energy leaving the system boundaries as heat.
All real systems dissipate energy. If they did not, we would have perpetual motion machines! Idealized, lumped-parameter dissipative elements are often used to model the major dissipative processes and components in real systems.
Idealized dissipative elements are assumed to store no energy. The net work done on or by the element must balance with the heat transfer at the element's boundary. This can be written formally using the first law of thermodynamics by ignoring all energy storage.
$$\cancel{\dot{E}} = \dot{Q} - \dot{W}$$Further, the lumped-parameter, idealized energy dissipation elements used to construct dynamic physical models are assumed to exchange only one type of work with their surroundings. This could be electrical work, flow work, etc., but not a combination of these types.
An idealized energy dissipation element's "elemental equation" is a restatement of the first law of thermodynamics in power form. An elemental equation uses an empirical relationship, e.g. Ohm's law in the case of an electrical resistor, and combines that relationship with the first law to tell us how the element transfers heat outside of the system boundary in terms of the element's lumped parameter, or dominant physical characteristic.
A list of the elemental equations for the lumped-parameter energy dissipation elements we may encounter in ME480 is shown below, along with the net power consumed by each element, which all must be transferred to the element's surroundings as heat.
Idealized, lumped-parameter power converting transducers are often used to represent physical objects in a system that transform energy from one form to another. Motors convert electrical work to mechanical work. Pumps convert fluid flow work into mechanical work. Gears convert mechanical work at one angular velocity to mechanical work at another angular velocity. The key characteristics of power-converting transducers are that the input work and the output work are the same, meaning that the idealized transducer neither stores nor dissipates energy.
The power-converting transducer only does "one job," and that is power conversion. Therefore, if something can be said to be adequately modeled as a power-converting transducer, it cannot store energy or transfer it to the system's surroundings. In other words, its first law of thermodynamics equation in power form looks like this:
$$\cancel{\dot{E}} = \cancel{\dot{Q}} - \dot{W}$$These simplifications mean that the net work on the transducer must be zero-- in other words, the power into the transducer and the power out of the transducer must be the same.
$$\dot{W}_{in} = \dot{W}_{out}$$For a gear train, the assumption of no energy storage would mean that the gears must be massless. The assumption of no heat transfer would mean that the gears have no friction or damping in their bearings. For a motor, the assumption of no energy storage would mean that the motor shaft has no inertia. The assumption of no heat transfer would mean that the motor's armature has no electrical resistance, and that its rotating assembly has no damping. Are these reasonable assumptions?
Probably not. Approximating real transducers using only an idealized transducer element is often a mistake-- most real motors, pumps, and gears have losses and/or intrinsic inertias that store energy. These "extra" pieces of a real power transducer can be represented by breaking the real power transducer into a couple of lumped-parameter idealized elements. Often, the full set of elements needed to fully describe a real power transducer include energy storage and/or dissipative elements along with an idealized transducer. This separates each of the real energy storage, conversion, and dissipative processes in the transducer into "chunks."
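For the idealized transducer element itself, the power balance $\dot{W}_{in} = \dot{W}_{out}$ forces the two power variables to scale inversely. A sketch for a hypothetical ideal gear train (gear ratio and input values are assumptions):

```python
# Ideal transducer power balance for a hypothetical gear train:
# speed drops by the gear ratio while torque rises by it, conserving power.
N = 4.0                        # assumed gear ratio
w_in, T_in = 100.0, 2.0        # input angular velocity [rad/s] and torque [N*m]
w_out = w_in / N               # output speed
T_out = T_in * N               # output torque
print(w_in * T_in, w_out * T_out)   # 200.0 W in, 200.0 W out
```

Losses and inertia in a real gear train would then be modeled as separate dissipative and energy-storage elements attached to this ideal core.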
You will almost never scope a physics-based model for a system using only one idealized lumped parameter element. However, as the number of elements in your model increases, it can be increasingly difficult to keep track of how the elements influence one another and transfer energy to each other or to the environment.
One convenient and powerful way to help visualize how power (and thus energy) flow in a network (system) of interconnected lumped-parameter elements is to represent a system as a kind of "circuit," regardless of whether the system in question is electrical, mechanical, fluid, or mixed.
This creates a problem, in that it is hard to draw masses, tanks, or viscous dampers as "circuits" in the traditional sense. To make this operationally easier, system dynamicists have come up with several systems for drawing generalized circuit-like networks for use in understanding how a system's components interact. We will explore such a "universal" method to help operationalize model construction in ME480.
The approach we will use is based on Prof. Seeler's system dynamics textbook. The methodology is called the Linear Graph method, and it was developed at MIT in order to treat all systems of one-dimensional, idealized, lumped-parameter elements in a consistent way. The law of the land in the linear graph method is "no special symbols," so while everything is represented as a circuit-like network of elements connected by "nodes" (points of constant across-type variable), the symbol for an electrical resistor looks the same as a symbol for a mechanical damper.
In a linear graph, idealized sources are represented as lines coming from "ground," which represents the zero reference point for the Across-type variable relevant to the type of system in question (velocity, pressure, or voltage). The source can be either an idealized Through (T) type source or an idealized Across (A) type source.
Lumped parameter energy storage and energy dissipating elements in linear graphs are drawn as simple arrows. These elements can all be thought of as having an "input port" and an "output port" for the through-type variable. Why?
Think about a spring or a damper, which both physically have two connections, one at each end. A section analysis and free body diagram of an idealized, massless spring or damper would show that the net force is the same throughout the element, so force is thought of as flowing through the element from one connection, or "node," to the next.
Any two-port element in a linear graph is drawn this way. Some examples are shown below.
The only time a lumped-parameter "two port" element is drawn differently is in the case of an A-type energy storage element that really only has one physical connection. For example, fluid capacitors store energy in pressure that is always measured with respect to the surrounding pressure, which is the "ground node" or zero reference point for the A-type variable in the system. Similarly, translational masses and rotational inertias store energy in velocity, but this velocity must be referenced to an inertial frame in order for Newton's laws to apply. Therefore, they too must always be connected to "ground." The tricky part is that these elements aren't physically connected to the ground reference, so we draw them with a dotted line to show that they are referenced to ground but not physically connected. Examples are shown below.
In a linear graph, idealized transducers connect portions of the diagram that represent different power types-- for example if we are talking about a motor, the power into the motor has an across type variable of voltage and a through type variable of current. The power out of the motor has an across type variable of angular velocity and a through type variable of torque. The generic symbol for an idealized transducer is shown below:
Note that the "ground" reference is not connected between the two "sides" of the transducer. This is to symbolize that the A and T type variables on each "side" of the transducer might have different units. It's also worth mentioning that a transducer is the only type of element we will use in a linear graph that has four "ports" rather than two-- it has two ports for its "input" power, and two ports for its "output" power.
In system dynamics, a "node" is a fictitious element in your system through which power is transferred without energy storage or "losses" (energy transfer to outside the system boundary). In electrical, fluid, and mechanical systems, a "node" represents a fictitious place where no heat transfer occurs from the system to its environment and no energy is stored.
In terms of the first law of thermodynamics, these assumptions result in the following:
$$\require{cancel}$$$$\cancel{\dot{E}} = \cancel{\dot{Q}} - \dot{W}$$This means that the net work on the node must be zero.
We often imagine that idealized, lumped-parameter elements in our system model are connected to one another through this type of lossless interface. The node "splits" the through-type variable in the power equation, distributing it to the elements to which it is connected. A single node will by definition be a place in your system where the across type variable in the power equation is constant. Jumper wires and breadboard rows are examples of "nodes" in electrical circuits. Rigid connections between a mass and a spring could be conceived of as "nodes" in a mechanical system. A short section of pipe with negligible fluid resistance where multiple flows split or come together in a fluid system also might be well approximated by a "node" of constant pressure.
The principle of continuity is a re-statement of the first law of thermodynamics with specific assumptions. For many types of systems, the assumptions boil the first law of thermodynamics down to a simple statement about conservation of mass. In these cases, the principle of continuity states:
"flows into a node must balance with flows out of a node,"
Where the definition of "flows" depends on the type of system at hand. For electrical systems, the principle of continuity at a node boils down to a "conservation of current," or Kirchoff's Current Law:
$$\sum i _{node} = 0$$With currents into a node considered positive, and currents out of a node considered negative.
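The sign convention makes the node balance a simple sum. The sketch below checks continuity at a hypothetical electrical node where 3 A enters and splits into two branches (all values are made up for illustration):

```python
# Continuity (KCL) at a hypothetical node: 3 A enters and splits into two
# branches. Flows in are positive; flows out are negative.
currents = [3.0, -1.2, -1.8]
print(sum(currents))   # sums to zero
```

The same bookkeeping works for forces at a mechanical node or volumetric flows at a fluid junction, as described below.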
For a mechanical system, Force is the quantity that "flows through" elements via Newton's 3rd law (making an imaginary cut anywhere along a rigid body and drawing a free body diagram will confirm this). Thus, a "node" in a mechanical system is a rigid, imaginary element with "no" mass. Often, a node is superimposed on top of a translating or rotating idealized mass, and we imagine that the inertial reaction force from that idealized mass flows into or out of the node as appropriate. This is similar to the use of the D'Alembert principle which conceives an "inertial reaction force" exerted by a mass rather than thinking about the familiar form of Newton's second law, $\sum F = m\dot{v}$. The D'Alembert vs. Newtonian viewpoint is probably responsible for the war against "centrifugal force" waged in many high school physics classrooms across the U.S.
Writing the first law of thermodynamics for a node in a mechanical system, and canceling its (single) velocity from the first law equation, one obtains:
$$\sum {F}_{node} = 0$$For a fluid system (incompressible), a "node" might represent an infinitesimal junction between two pipes or valves, or a junction between a valve and a tank. Writing the first law of thermodynamics for this "infinitely thin" element in which only one pressure exists (and can thus be canceled), one obtains:
$$\sum \dot{\mathcal{V}}_{node} = 0$$With volumetric flows into the node being considered positive, and flows out of the node negative.
The principle of compatibility states that the sum of the voltage drops around any closed "loop" in a circuit-like network must be zero. This is a direct consequence of the first law of thermodynamics. Consider the following circuit, for which 3 closed loops exist. The circuit diagram and the linear graph are shown side by side for comparison. They contain exactly the same information.
For Loop 3, the principle of compatibility states: $$\require{cancel}$$ $$ V_{g1} + V_{12} + V_{23} + V_{3g} = 0$$
Which, given the definition $V_{ab} = V_a - V_b$, could be re-written as:
$$ \cancel{V}_{g} - V_1 + V_1 - V_2 + V_2 - V_3 + V_3 - \cancel{V}_{g} = 0$$Similarly, for Loop 1, the principle of compatibility states:
$$V_{g1} + V_{12} + V_{2g} = 0$$And for Loop 2, the principle of compatibility states:
$$V_{g2} + V_{23} + V_{3g} = 0$$Compatibility can also be applied to fluid systems (replacing voltages with pressures) and to mechanical systems (replacing voltages with velocities).
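Because each drop is defined as $V_{ab} = V_a - V_b$, the terms around any closed loop telescope to zero no matter what the node voltages are. A quick check with hypothetical node voltages (ground taken as 0 V):

```python
# Compatibility (KVL) around a loop: with V_ab = V_a - V_b, the drops
# telescope to zero. The node voltages below are hypothetical.
V = {"g": 0.0, "1": 9.0, "2": 5.0, "3": 2.0}
loop = [("g", "1"), ("1", "2"), ("2", "3"), ("3", "g")]
total = sum(V[a] - V[b] for a, b in loop)
print(total)   # 0.0
```

Swapping the node voltages for pressures or velocities gives the fluid and mechanical versions of the same check.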
The aim of the linear graph method is to take a diagram showing physical components of a system and to turn it into a circuit-like network of idealized, lumped-parameter elements. The model must be properly scoped before a linear graph model is constructed. This means that the model's inputs, outputs, and its list of constitutive idealized elements must be known before a linear graph is constructed.
To construct a linear graph, you can follow the general procedure below.
Once your linear graph representation of your system is complete, you are ready to continue with model construction. The linear graph can be treated like an electrical circuit. The principles of continuity (KCL) and compatibility (KVL) will be helpful in building a differential equation or set of differential equations describing your system.
Model scoping for a dynamic, lumped-parameter model can start similarly to model scoping for a purely empirical model.
After you have scoped your lumped-parameter dynamic model, it is time to move on to model construction. What follows is one possible process you could follow. There are others, but this is the one we will refer to in ME480. You have followed a similar process in ME352, so we will not provide exhaustive, detailed examples here.