John Doty
Dayton, Ohio, United States

Simple equations of state, such as the ideal gas equation, are often used in modeling and simulation applications. In this investigation, a statistical approach is presented to develop an Equation of State (EOS) to quantify the nonideality of air (compressibility factor) as a function of the reduced temperature and reduced pressure. Comparison of the surrogate model for compressibility factor (Z) with validated results from Engineering Equation Solver (a standard thermodynamic database) indicates that the surrogate model has an overall mean absolute deviation of less than 0.5% over the entire pressure and temperature range studied. The surrogate model accepts both reduced temperature and reduced pressure as inputs and returns the property value of interest as well as the error. The range of temperature and pressure selected corresponds to those for aerospace applications. To demonstrate the utility of the compressibility factor constructed, an application is presented for an airfoil subjected to a hypersonic flow.
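The paper's actual surrogate for Z(T_r, P_r) is not reproduced in the abstract; as an illustrative sketch of the general approach only, the code below fits a quadratic response surface by least squares. The basis, coefficients, and synthetic data are assumptions for demonstration, not the paper's validated model.

```python
import numpy as np

def fit_z_surrogate(Tr, Pr, Z):
    """Fit a quadratic response surface Z ~ f(Tr, Pr) by least squares.

    Returns coefficients for the basis [1, Tr, Pr, Tr*Pr, Tr**2, Pr**2].
    """
    A = np.column_stack([np.ones_like(Tr), Tr, Pr, Tr * Pr, Tr**2, Pr**2])
    coeffs, *_ = np.linalg.lstsq(A, Z, rcond=None)
    return coeffs

def eval_z(coeffs, Tr, Pr):
    """Evaluate the fitted surface at a single (Tr, Pr) point."""
    basis = np.array([1.0, Tr, Pr, Tr * Pr, Tr**2, Pr**2])
    return float(basis @ coeffs)

# Synthetic illustration: a known quadratic "truth" is recovered by the fit.
rng = np.random.default_rng(0)
Tr = rng.uniform(1.0, 3.0, 200)
Pr = rng.uniform(0.1, 2.0, 200)
Z = 1.0 - 0.05 * Pr + 0.01 * Pr**2 + 0.002 * Tr * Pr  # stand-in, not real air data
c = fit_z_surrogate(Tr, Pr, Z)
print(round(eval_z(c, 2.0, 1.0), 4))  # recovers the stand-in truth: 0.964
```

A real application would replace the synthetic Z values with tabulated compressibility data over the reduced-coordinate range of interest.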
The initial development of a statistically-based process for validation of computational experiments is presented. The focus of this newer methodology is Uncertainty Quantification (UQ), Sensitivity Analysis (SA), and variance reduction. The two types of uncertainty (aleatory and epistemic) are incorporated into the methodology. For this investigation, the behavior of the simulation inputs is presumed unknown (aleatory due to lack-of-knowledge) and is therefore modeled using equally-probable uniform distributions. These distributions have known and quantifiable uncertainties that are used to determine the sampling residuals, which are expected to follow a random normal distribution (epistemic with known mean and variance). Statistically-based sample sizes and uncertainty propagation limits are determined based upon a priori tolerance specifications as well as Type I and Type II risks. Newer quasi-random sampling procedures are demonstrated to be superior to classical pseudo-random sampling procedures. A simple example is presented that outlines the methodology, which is extensible to all validation processes.
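The abstract does not identify which quasi-random procedure was used; the Halton sequence below is one common low-discrepancy alternative to pseudo-random sampling, shown purely as a self-contained sketch of the idea (the choice of sequence and bases is an assumption).

```python
def halton(n, base):
    """First n points of the 1-D Halton (van der Corput) sequence in `base`."""
    seq = []
    for i in range(1, n + 1):
        f, r, k = 1.0, 0.0, i
        while k > 0:
            f /= base
            r += f * (k % base)  # next radical-inverse digit
            k //= base
        seq.append(r)
    return seq

def halton_2d(n):
    """2-D quasi-random points using the co-prime bases 2 and 3."""
    return list(zip(halton(n, 2), halton(n, 3)))

pts = halton_2d(8)
print(pts[0])  # first point is (1/2, 1/3)
```

Unlike pseudo-random draws, these points fill the unit square evenly by construction, which is the property that typically accelerates convergence of sampled uncertainty estimates.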
A statistically-designed error control procedure is presented for determining the proper sample size of computational simulations for use in validation experiments. This procedure simultaneously invokes probabilistic knowledge of Type I and Type II error estimates for Uncertainty Quantification in an a priori fashion. In this manner, the individual performing the simulations has an estimate of how well the simulation data are likely to match the validation data before the experiments are performed. Visualizations of the procedure are detailed based upon standard Operating Characteristic (OC) curves as well as a standard graphical technique using nomographs. A new multidimensional graphical procedure is presented for enhanced visualization of the error response surface. This surface dramatically enhances the user's ability to determine sample sizes and potential outlier regions with minimal statistical skill required. A probabilistic numerical algorithm is developed for numerical simulation in MATLAB. The algorithm provides a robust solution to the difficult mixed-integer and real-valued optimization problem with simultaneous development of graphical visualizations of the response surfaces. The standard graphical procedure, the newly-developed multidimensional visualization, and the numerical scheme all produce the same sample sizes for simultaneous control of Type I and Type II errors.
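The paper's MATLAB algorithm is not reproduced here; the textbook normal-approximation formula below captures the same simultaneous a priori control of Type I and Type II error for a two-sided test on a mean, and is what the OC curves and nomographs mentioned above tabulate graphically. It is a standard sketch, not the paper's mixed-integer optimizer.

```python
from math import ceil
from statistics import NormalDist

def sample_size(alpha, beta, sigma, delta):
    """Sample size for a two-sided z-test that controls the Type I error
    at alpha and the Type II error at beta when detecting a mean shift
    of size delta with known standard deviation sigma."""
    z = NormalDist().inv_cdf
    n = ((z(1 - alpha / 2) + z(1 - beta)) * sigma / delta) ** 2
    return ceil(n)

# alpha = 0.05, power = 0.90, shift of one standard deviation:
print(sample_size(0.05, 0.10, 1.0, 1.0))  # 11
```

Tightening the tolerance (smaller delta) or demanding more power (smaller beta) grows n rapidly, which is why an a priori calculation of this kind matters before committing to expensive simulations.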
In this paper, two approaches for modeling an aircraft thermal system were investigated. The system, comprising a fuel thermal management system, an air cycle machine for ECS cooling, and a turbofan engine, was modeled with First Law and Second Law thermodynamic analyses to determine energy transfers and entropy generation. In the first modeling approach, heat transfer rates were specified directly, and in the second, heat transfer rates were calculated based on heat exchanger effectiveness. The models were parameterized in terms of relevant system design variables including temperatures, heat exchanger properties, flow rates, and pressure ratios. Using the models, parametric design space exploration studies were conducted by applying multivariate data visualization and optimization approaches based on parallel coordinates. The investigations focused on three primary aspects: (1) identification of important design parameters driving system performance, (2) Second Law feasibility of heat transfers, and (3) optimality in terms of engine thrust and fuel burn performance. A design of experiments (DOE) was run using the model in order to populate the design space for the studies. To allow rapid and interactive trade space explorations, artificial neural network surrogate models were regressed based on the DOE results. The results of the studies illustrate the significant influence of the choice of modeling approach on the feasibility and optimality of aircraft thermal system designs.
A new formulation is presented for calculation of the unsteady entropy generation rate for thermophysical systems. This formulation has far-reaching implications for the analysis and design of all systems. It has long been considered that direct calculation of the entropy generation rate was impossible due to the insufficiency of independent information for simultaneous determination of both the unsteady entropy change and the entropy generation rate. The formulation presented herein is consistent with numerical implementations and presents a unique approach to solving this dilemma. Entropy is recast in terms of other, independent state properties via the state postulate of thermodynamics, and then the rate of change of entropy is represented in terms of other known state derivatives. The concept is generally applicable to physical systems as long as the state postulate is valid and the state derivatives are known. The application of this formulation allows the path-dependent entropy generation rate to be directly calculated.
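In symbols, the recasting described above amounts to expanding entropy via the state postulate, s = s(T, P), so that the unsteady entropy rate follows from known state derivatives. This is a standard property relation written here as a sketch of the idea, not the paper's full formulation:

```latex
% s = s(T,P); the Maxwell relation gives (\partial s/\partial P)_T = -(\partial v/\partial T)_P
\frac{\mathrm{d}s}{\mathrm{d}t}
  = \left(\frac{\partial s}{\partial T}\right)_{P}\frac{\mathrm{d}T}{\mathrm{d}t}
  + \left(\frac{\partial s}{\partial P}\right)_{T}\frac{\mathrm{d}P}{\mathrm{d}t}
  = \frac{c_p}{T}\,\frac{\mathrm{d}T}{\mathrm{d}t}
  - \left(\frac{\partial v}{\partial T}\right)_{P}\frac{\mathrm{d}P}{\mathrm{d}t}
```

With ds/dt evaluated independently from temperature and pressure histories in this way, the unsteady entropy balance can then be solved directly for the entropy generation rate.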
This paper applies statistical concepts to systems-level applications with relevance to the integration, operation, and optimization of engineering components and systems. A physics-based exergy analysis is combined with system performance goals. Designed experiments are used to pre-determine relevant simulation points for the analysis in order to develop the statistical models most effectively and efficiently. The results of the simulation are then processed via advanced statistics to create a surrogate model that identifies the component and/or system within desired or anticipated operational ranges. These statistical surrogate models represent the system in a modular fashion. In this manner, a statistical module may be interfaced independently from its origination, and a “system of systems” may be built from the surrogate modules that may be used to efficiently investigate engineering trades and perform preliminary design studies. Quantitative examples from aerospace components and systems are used to demonstrate the overall process.
A new metric for the design and analysis of particle image velocimetry (PIV) experiments is presented. This metric utilizes the theory of maximum work potential (exergy destruction minimization) and statistical regressions of the experimental data in order to obtain a ...
We extend the energy- and exergy-based methodology presented in previous work 1,2 from analysis to preliminary design. Therein, a three-component system was modeled in which heat transfer from the energy source was allowed, while the other devices were considered to be reversible. Those single-parameter studies yielded many results which were physically impossible, clearly suggesting that the analysis/design paradigm be changed from energy-based to exergy-based. Here, we extend the analysis to preliminary design applications. A steam turbine with fixed inlet conditions was modeled thermodynamically and simulated in MATLAB using three design variables: turbine exit quality, turbine exit pressure, and turbine heat transfer. A statistically generated test matrix was developed using Design of Experiments (DOE) for three test cases. For the first test case, only the quality was varied while exit pressure and heat transfer remained constant. For the second test case, both quality and exit pressure were varied while heat transfer remained constant. The last case allowed all three design variables to vary simultaneously. The test matrix was analyzed using a 1st Law as well as a combined 1st and 2nd Law methodology to determine turbine specific work and exergy destruction for the system. Results from the test cases were analyzed to generate surrogate models used for turbine optimization.
Physics-based modeling and simulation requires proper formulation of the fundamental equations for numerical solution in order to best represent the physical system being modeled. A standard engineering method to calculate the entropy generation rate is to assume steady flow and then use the steady entropy balance equation. This approach is strictly valid only for truly steady flows and is improperly posed for dynamic systems. Another approach often implemented in numerical schemes is to assume that the entropy rate of change in the system can be approximated discretely (e.g., employing finite difference and/or finite volume schemes). This discrete approach typically requires knowledge of how the entropy changes dynamically with temperature and pressure and is most often implemented using ideal and/or perfect gas relations. An equivalent approach is presented using real gas relations that may be more relevant to aerospace applications in which real gas effects are important.
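As a concrete instance of the perfect-gas entropy relation the passage refers to, the sketch below evaluates Δs = c_p ln(T₂/T₁) − R ln(P₂/P₁) for air. The property constants are standard textbook values; the paper's real-gas extension is not reproduced here.

```python
from math import log

R_AIR = 287.05   # J/(kg*K), specific gas constant of air
CP_AIR = 1005.0  # J/(kg*K), assumed constant (perfect-gas model)

def ds_ideal(T1, T2, P1, P2, cp=CP_AIR, R=R_AIR):
    """Specific entropy change of a perfect gas between two states, J/(kg*K)."""
    return cp * log(T2 / T1) - R * log(P2 / P1)

# Sanity check: along an isentropic path T2/T1 = (P2/P1)**(R/cp), ds vanishes.
T1, P1, P2 = 300.0, 100e3, 800e3
T2 = T1 * (P2 / P1) ** (R_AIR / CP_AIR)
print(abs(ds_ideal(T1, T2, P1, P2)) < 1e-6)  # True
```

A real-gas implementation would replace the constant c_p and the R/P term with state derivatives evaluated from a real-gas equation of state, but the structure of the calculation is the same.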
An inferential-based statistical analysis is employed using Design of Experiments (DoX) that is applied to a 10-parameter nonlinear wing weight model. An equivalent surrogate model is developed that is just as accurate but more efficient than the original model. Several DoX design candidates were evaluated, with the five-factor, ½-fractional factorial augmented with center points selected as the final design. Only five (5) of the original 10 variables in the wing weight model “drive” the results, with the other five (5) variables adding no value. Compared to an exhaustive, enumerated search, the surrogate model resulted in a 98% reduction in the number of evaluations required to produce the optimized weight to within 1% of the full model. If properly implemented, statistically-based DoX can dramatically reduce the number of trials required to adequately characterize a system’s behavior and performance with minimal impact on the accuracy and precision of the results.
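The five-factor half-fraction with center points described above can be sketched generically as follows. The aliasing choice E = ABCD is the conventional resolution-V generator for a 2^(5-1) design and is an assumption here; the paper's actual design columns and factor assignments are not reproduced.

```python
from itertools import product

def half_fraction_with_centers(k=5, n_center=3):
    """2^(k-1) half-fraction in coded units: a full factorial in the first
    k-1 factors, with the last factor set to the product of the others
    (defining relation I = ABCDE for k = 5), plus center-point runs."""
    runs = []
    for levels in product([-1, 1], repeat=k - 1):
        last = 1
        for v in levels:
            last *= v  # generator: E = ABCD
        runs.append(list(levels) + [last])
    runs += [[0] * k for _ in range(n_center)]  # center points at coded 0
    return runs

design = half_fraction_with_centers()
print(len(design))  # 16 factorial runs + 3 center points = 19
```

Nineteen runs versus the thousands needed for an enumerated search over ten factors illustrates the efficiency claim in the abstract; the center points additionally allow a check for curvature.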
The concepts of stochastical mathematics are applied to two classes of engineering problems of interest: one for which a known physical-mathematical model exists, and one for which only experimental data exist with no underlying physical model. These disparate applications serve to demonstrate the flexibility and extensibility of stochastics as well as engender a sense of robustness of the methodologies. For the case of the known physical model, the numerical solution of the model provides a benchmark against which to compare the stochastical estimates. For the case in which only data exist, probabilistic stochastics, in the sense of most-likely estimates, may be used to capture the trends and magnitudes of the data inferentially. In both cases, the stochastical models exhibit excellent agreement with the data. The stochastical concepts are therefore demonstrated to be robust and widely applicable to many engineering problems of interest.
System optimization and design of an aircraft is required to achieve multiple objectives. Often one of the main objectives is system efficiency for reduction in fuel use for a given mission. System efficiency can be quantified by either a 1st or 2nd law thermodynamic analysis. A 2nd law exergy analysis can provide a more robust means of accounting for all of the energy flows within and in between subsystems. These energy flows may be thermal, chemical, electrical, pneumatic, etc. The incorporation of a transient system analysis in the design process of an aircraft can provide untapped opportunities for gains in energy efficiency of the aircraft's operation. In order to quantify the efficiency gains utilizing a 2nd law exergy analysis, the non-equilibrium term of exergy generation must be accounted for in the analysis. This paper demonstrates the implementation of a non-equilibrium exergy analysis of a heat exchanger.
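The link between entropy generation and lost work that underlies such an exergy analysis is the Gouy-Stodola relation; for a two-stream heat exchanger with negligible external heat loss it takes the standard form below. This is a textbook sketch, not the paper's non-equilibrium formulation:

```latex
\dot{X}_{\mathrm{dest}} = T_0\,\dot{S}_{\mathrm{gen}}, \qquad
\dot{S}_{\mathrm{gen}} = \dot{m}_h\,(s_{h,\mathrm{out}} - s_{h,\mathrm{in}})
                       + \dot{m}_c\,(s_{c,\mathrm{out}} - s_{c,\mathrm{in}}) \;\ge\; 0
```

Here T_0 is the dead-state temperature and the subscripts h and c denote the hot and cold streams; every unit of entropy generated in the exchanger destroys T_0 worth of work potential elsewhere in the aircraft system.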