
Speakers – Abstracts

Workshops

 

W1 – Recent Developments in Modeling and Numerical Simulation of Subsurface Flows – Luiz Felipe F. Pereira – University of Wyoming – USA – Part I

Abstract - There are several challenges associated with the modeling and numerical simulation of multiphase subsurface flow problems. These problems are genuinely multiscale in space and, within each spatial scale, one needs to resolve the sharp gradients and dynamics evolving at vastly different rates that are the hallmarks of multiphase flows in porous media. The speakers in this workshop shall discuss very recent developments at the pore (a few millimeters), Darcy (a few centimeters) and field (a few kilometers) scales. Accurate predictions at the field scale require that shorter scales inform larger scales through scientifically correct up-scaling procedures. At the pore scale, new parallel direct numerical simulations of the governing equations in 3D images of porous media will be discussed, aiming at producing coefficients of the governing equations at the Darcy scale. At the Darcy scale, new multiphase experiments, models and accurate numerical techniques, which can take into account the deformation of aquifer or reservoir rocks, will be presented. At the field scale, accurate predictions based on Markov Chain Monte Carlo strategies, which take into account the underlying uncertainty in the determination of properties of subsurface formations, will be discussed.

Speakers

  1. Maicon R. Correa – Universidade Federal de Juiz de Fora – Brazil - A Semi-Discrete Central Scheme for Scalar Hyperbolic Conservation Laws with Heterogeneous Storage Coefficient and its Application to Porous Media Flow.

    Abstract: In this work we present a new Godunov-type semi-discrete central scheme for a scalar conservation law, based on a generalization of the Kurganov and Tadmor (KT) scheme, which allows for spatial variability of the storage coefficient (e.g. porosity in multiphase flow in porous media) approximated by piecewise-constant interpolation. We construct a generalized numerical flux at element edges based on a non-staggered inhomogeneous dual mesh, which reproduces the one postulated by Kurganov and Tadmor under the assumption of a homogeneous storage coefficient. Numerical simulations of two-phase flow in strongly heterogeneous porous media illustrate the performance of the proposed scheme and highlight the important role of the permeability-porosity correlation in finger growth and breakthrough curves.
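
    A minimal sketch may help fix ideas. The Python/NumPy fragment below is a hedged, first-order illustration of a semi-discrete central scheme for phi(x) u_t + f(u)_x = 0 with a piecewise-constant storage coefficient; the flux and porosity field are made up for illustration, and the talk's actual scheme adds piecewise-linear reconstruction and the inhomogeneous dual mesh, which this sketch does not reproduce.

```python
import numpy as np

# First-order central (Rusanov / local Lax-Friedrichs) sketch for
#   phi(x) * u_t + f(u)_x = 0,
# a scalar conservation law with a piecewise-constant storage coefficient.
# Flux and porosity are illustrative placeholders, not the talk's model.

def flux(u):                       # Buckley-Leverett-type fractional flow
    return u**2 / (u**2 + (1.0 - u)**2)

def wave_speed(u):                 # |f'(u)| estimated by centered differences
    eps = 1e-6
    return np.abs(flux(u + eps) - flux(u - eps)) / (2.0 * eps)

def rhs(u, phi, dx):
    a = np.maximum(wave_speed(u[:-1]), wave_speed(u[1:]))   # local speeds
    H = 0.5 * (flux(u[:-1]) + flux(u[1:])) - 0.5 * a * (u[1:] - u[:-1])
    dudt = np.zeros_like(u)
    dudt[1:-1] = -(H[1:] - H[:-1]) / (phi[1:-1] * dx)       # phi in the time term
    return dudt

n, dx, dt = 200, 1.0 / 200, 1e-4
phi = np.where(np.arange(n) < n // 2, 0.2, 0.35)            # heterogeneous porosity
u = np.zeros(n); u[:10] = 1.0                               # injected phase on the left
for _ in range(2000):                                       # forward-Euler in time
    u = u + dt * rhs(u, phi, dx)
```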

  2. Benoit Noetinger - Research Engineer - IFP Energies nouvelles, France - Mathematics in the subsurface: managing concepts, data, models and large uncertainties.

    Abstract: Increasing computing facilities make intensive simulations of multiphase flow in the subsurface possible. Applications range from the oil and gas industry to water management and subsurface waste disposal. These simulations require solving coupled nonlinear transport equations on complex geometries that can be represented by billion-grid-block models. On the other hand, increasing field development costs and societal pressure impose strong risk-management constraints. The focus is therefore not only on obtaining a single, very accurate simulation, but on sampling various scenarios or parameter values. Developing practical tools in such a context requires addressing advanced generic mathematical issues such as stochastic PDEs, homogenization and multiscale methods, inverse problems and uncertainty quantification. The presentation will focus on some "hot topics" among these issues and on some promising approaches.

  3. Victor Ginting - Department of Mathematics - University of Wyoming - USA - A Bayesian MCMC for Efficient Uncertainty Quantification in Permeability and Porosity of Reservoir Models.

    Abstract: Quantifying uncertainty in porosity and permeability is crucial in subsurface flows. We describe a novel technique for this quantification that enables predictive simulation of the flows. The technique allows for conditioning the sampling process with available dynamic measurement data. We combine Bayesian framework with a two-stage Markov Chain Monte Carlo (MCMC) method for reconstructing the spatial distribution of permeability and porosity. A computationally inexpensive coarse-scale model is employed to screen proposals prior to fine-scale simulations. Numerical results are presented to demonstrate the performance of the technique.
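
    To make the two-stage idea concrete, here is a hedged Python sketch (all function names are placeholders, not the authors' code): a cheap coarse-scale misfit screens each proposal, and only proposals that pass this first test pay for a fine-scale run; a second acceptance test then corrects the chain so that it still targets the fine-scale posterior.

```python
import numpy as np

# Two-stage (delayed-acceptance) Metropolis sketch with Gaussian likelihoods
# exp(-F / (2 sigma^2)), F = data misfit. coarse_misfit / fine_misfit / propose
# are user-supplied placeholders; `propose` is assumed symmetric.

def two_stage_mcmc(theta0, propose, coarse_misfit, fine_misfit,
                   n_samples, sigma_c=1.0, sigma_f=1.0, seed=0):
    rng = np.random.default_rng(seed)
    theta, Fc, Ff = theta0, coarse_misfit(theta0), fine_misfit(theta0)
    chain = [theta]
    for _ in range(n_samples):
        prop = propose(theta, rng)
        Fc_prop = coarse_misfit(prop)                 # cheap screening run
        if np.log(rng.uniform()) < (Fc - Fc_prop) / (2.0 * sigma_c**2):
            Ff_prop = fine_misfit(prop)               # expensive fine-scale run
            log_alpha = ((Ff - Ff_prop) / (2.0 * sigma_f**2)
                         - (Fc - Fc_prop) / (2.0 * sigma_c**2))
            if np.log(rng.uniform()) < log_alpha:     # fine-scale correction
                theta, Fc, Ff = prop, Fc_prop, Ff_prop
        chain.append(theta)
    return chain
```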

  4. Marcos Mendes - Center for Fundamentals of Subsurface Flow, University of Wyoming - USA - A two-phase, three-component model for CO2 injection in brine aquifers.

    Abstract: We present an efficient IMPES (Implicit Pressure, Explicit Saturation) discretization strategy to solve a two-phase (brine and CO2-rich), three-component model for CO2 injection in brine aquifers. Numerical results will show the accuracy of the proposed strategy and illustrate the role of some physical properties (e.g. brine salinity) in the capture of CO2. We also illustrate the advantages of using object-oriented programming for the construction of a general compositional model simulator capable of integrating distinct numerical schemes implemented in a cooperative research environment.
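
    For readers unfamiliar with the IMPES splitting, the hedged one-dimensional sketch below (incompressible two-phase flow, made-up Corey-type mobilities and boundary conditions, no compositional effects) shows its structure: pressure is solved implicitly with mobilities frozen at the current saturation, then saturation is advanced explicitly with the resulting total velocity.

```python
import numpy as np

# Minimal 1D IMPES sketch: implicit pressure solve, explicit saturation update.
# Physics and boundary conditions are illustrative, not the talk's CO2-brine model.

n, dx, dt, phi = 100, 1.0 / 100, 2e-4, 0.2
S = np.zeros(n)                                     # wetting-phase saturation

for step in range(1000):
    lam_w, lam_o = S**2, (1.0 - S)**2               # Corey-type mobilities
    lam_t = lam_w + lam_o
    lam_face = 0.5 * (lam_t[:-1] + lam_t[1:])       # interface mobilities
    # implicit pressure: -d/dx(lam_t dp/dx) = 0 with p(0)=1, p(1)=0
    A = np.zeros((n, n)); b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0; b[0] = 1.0
    for i in range(1, n - 1):
        A[i, i - 1] = -lam_face[i - 1]
        A[i, i] = lam_face[i - 1] + lam_face[i]
        A[i, i + 1] = -lam_face[i]
    p = np.linalg.solve(A, b)
    u_face = -lam_face * (p[1:] - p[:-1]) / dx      # total velocity at faces
    # explicit saturation step, upwinding the fractional flow (flow to the right)
    F = (lam_w / lam_t)[:-1] * u_face
    S[1:-1] -= dt / (phi * dx) * (F[1:] - F[:-1])
    S[0] = 1.0                                      # injection boundary
```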

 

W2 – Recent Developments in Modeling and Numerical Simulation of Subsurface Flows – Luiz Felipe F. Pereira – University of Wyoming – USA – Part II

Abstract – Please see the abstract of W1.

Speakers

  1. Marcio Murad - Laboratório Nacional de Computação Científica, Brazil - Coupled Hydro-Geomechanical Modeling of Pre-Salt Reservoirs.

    Abstract: In this talk we analyze the performance of some algorithms for coupling geomechanics and multiphase flows in strongly heterogeneous carbonate reservoirs, along with the mechanics of the cap rock, which consists of a saline formation that undergoes poroviscoelastic (creep) effects. By considering the input poromechanical parameters as random space functions, log-normally distributed with a given correlation structure, we develop new locally conservative numerical schemes to capture in an accurate manner the underlying hydromechanical coupling phenomena. For each realization of the input random fields we compare the local mass conservation within the context of the so-called one-way and two-way hydromechanical coupling algorithms. Numerical examples are performed to validate the numerical schemes proposed herein.

  2. Antonio Luiz Serra de Souza - Petroleum Engineer - Senior Consultant - PETROBRAS/CENPES/PDP/GR - Brazil - A Geomechanical Simulation Study Applied to a Campos Basin Field.

    Abstract: Reservoir geomechanics has become a relevant subject in the petroleum industry in the last few years. This is due to several factors, such as the impact of compaction and dilation on oil production and surface subsidence, fault reactivation and fracture propagation during improved oil recovery. At Petrobras, reservoir geomechanics became fundamental for the development of fields with a water-flooding project and a large number of faults, like several sandstone reservoirs in the Campos Basin. More recently, the carbonates of the pre-salt reservoirs, which include some naturally fractured reservoirs and some chemical compaction effects, also became a target for those studies. During the last three years, several geomechanical studies were conducted for Campos Basin fields in order to determine the optimum water injection pressure, so that maximum oil recovery could be achieved without dangerous fault reactivation. One particular field (called X here) has a very large number of faults, so a detailed geomechanical study began in 2010 in order to obtain the optimum injection pressure for Field X wells. The study is composed of the following steps:

    1 – Data audit, including laboratory, logging and field data, such as leak-off tests;

    2 – Mechanical Earth Model (MEM), defining stress magnitudes and directions;

    3 – Geological restoration, calculating stresses and strains from geological times until the beginning of field production;

    4 – Numerical simulation, using a reservoir simulator coupled to a geomechanical simulator, to determine the compaction and pressure limits for water injection.

    This work describes the methodology and models, as well as the main results of the steps above. Furthermore, the next steps for the geomechanical studies of the main Petrobras fields are also presented. The main conclusion is that geomechanical analysis will be one of the main steps in oil field development in the next few years.

  3. Marco Cardoso - Petroleum Engineer - Senior Consultant - PETROBRAS/CENPES/PDP/GR - Brazil - Reduced-Order Modeling Procedures for Production Optimization.

    Abstract: A trajectory piecewise linearization (TPWL) procedure for the reduced-order modeling of two-phase flow in subsurface formations is developed and applied. The method represents new pressure and saturation states using linear expansions around states previously simulated and saved during a series of preprocessing training runs. The linearized representation is projected into a low-dimensional space, with the projection matrix constructed through proper orthogonal decomposition of the states determined during the training runs. The TPWL model is applied to two example problems, containing 24,000 and 36,000 grid blocks, which are characterized by heterogeneous permeability descriptions. Extensive test simulations are performed for both models. It is shown that the TPWL model provides accurate results when the controls applied in test simulations are within the general range of the controls applied in the training runs, even though the well pressure schedules for the test runs can differ significantly from those of the training runs. This indicates that the TPWL model displays a reasonable degree of robustness. Runtime speedups using the procedure are very significant, a factor of 100-2000 (depending on model size and whether or not mass balance error is computed at every time step) for the cases considered. The preprocessing overhead required by the TPWL procedure is the equivalent of about 6-8 high-fidelity simulations. Finally, the TPWL procedure is applied to a computationally demanding multiobjective optimization problem, for which the Pareto front is determined. Limited high-fidelity simulations demonstrate the accuracy and applicability of TPWL for this optimization.
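
    The two key ingredients of TPWL admit a compact sketch. The hedged Python fragment below (placeholder names; the saved states, next-states and Jacobians would come from the training runs) shows a POD basis built from snapshots and a single linearized step around the nearest saved state, projected onto the POD subspace.

```python
import numpy as np

# TPWL-style sketch: POD basis from training snapshots, then a linearized
# step around the closest saved state, projected onto the POD subspace.
# saved_states / saved_next / saved_jacobians are placeholders for training data.

def pod_basis(snapshots, ell):
    # snapshots: (n_states, n_snapshots); keep leading ell left singular vectors
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :ell]

def tpwl_step(x, saved_states, saved_next, saved_jacobians, Phi):
    i = int(np.argmin([np.linalg.norm(x - xs) for xs in saved_states]))
    J = saved_jacobians[i]                  # dx_{k+1}/dx_k at the saved state
    Jr = Phi.T @ J @ Phi                    # reduced-order Jacobian
    dz = Jr @ (Phi.T @ (x - saved_states[i]))
    return saved_next[i] + Phi @ dz         # linear expansion around saved state
```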

  4. Felipe Pereira - Center for Fundamentals of Subsurface Flows - University of Wyoming - USA - A Multiscale Mixed Method for Porous Media Flows.

    Abstract: We use a non-overlapping iterative domain decomposition procedure based on the Robin interface condition to develop a new multiscale mixed method to compute the velocity field in heterogeneous porous media. Hybridized mixed finite elements are used for the spatial discretization of the equations. We define local, mixed multiscale basis functions to represent the discrete solutions in subdomains. Appropriate subspaces of the vector space spanned by these basis functions can be considered in the numerical approximations of heterogeneous porous media flow problems. The balance between numerical accuracy and numerical efficiency is determined by the choice of subspaces.

    A very detailed description of the numerical method is presented, along with a discussion of the implementation of the procedure in CPU-GPU clusters. Following that, numerical experiments are presented illustrating the important features of the new method and comparing computed results with ones derived from fine grid simulations.

 

W3 - Mathematical Models in Finance – José Mario Martínez - University of Campinas - Brazil

Abstract - Modern finance and economics theories and instruments rely heavily on mathematical models and methods. The technological advances of the last couple of decades changed the way trading and information spreading take place, yielded a number of new financial instruments, increased the number of participants in markets and introduced new investment theories. The main role of mathematical models is to provide tools for a better understanding of markets for different purposes – from mastering the fundamental principles, through risk management, to regulatory requirements and mechanisms. Thus the models are used by the financial industry, business ventures, individual investors, regulatory bodies and researchers. As the main characteristics of real problems are a high level of uncertainty and the complexity of the mutual dependence of factors that influence economic and financial activity, the models are based on sophisticated mathematical knowledge in many different areas, and advances in mathematical research are now deeply intertwined with the theory and practice of financial markets. This workshop will treat a small part of mathematical finance through models that illustrate different aspects of this vast area. The presentations will include modeling the dynamics of interest rates, option pricing and decision making under uncertainty, risk management in equity markets, the role of risk models in the regulatory framework and applications of mathematical methods in algorithmic trading.

Speakers

  1. Eduardo A. Prado, Itaú-Unibanco – Brazil - Quantitative methods in a large financial institution.

    Abstract: We will show some examples of how quantitative methods are being used to optimize financial results.

  2. Natasa Krejic, University of Novi Sad – Serbia - Optimization Models in Algorithmic Trading.

    Abstract: Algorithmic Trading, also known as Algorithmic Execution, is the automated process of trading exogenous orders in electronic (stock) exchanges. It became widely available to all market participants over the last decade and is now the dominant way of trading in stock exchanges. In capital markets, where even a marginal competitive edge by one institution is rewarded with disproportionately large profits, many efforts are directed towards cost reduction in the execution process. Execution of orders is itself an exceptionally complex problem dealing with many uncertain factors. Modeling at least some of these uncertain factors is a fundamental part of algorithmic trading. The problem of optimal execution is to find the right balance between desired reward and associated risk. In this talk we will present a couple of optimization models that are used in algorithmic trading.
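
    As one classical example of the kind of model the talk refers to (offered here as a hedged illustration, not claimed to be one of the speaker's specific models), an Almgren-Chriss-style mean-variance execution schedule balances expected impact cost against inventory risk:

```python
import numpy as np
from scipy.optimize import minimize

# Mean-variance optimal execution sketch: sell X shares over N intervals,
# trading off quadratic temporary impact against a variance penalty on the
# remaining inventory. All parameter values are made up for illustration.

X, N = 1.0e6, 20                      # shares to sell, number of intervals
eta = 2.5e-7                          # temporary impact coefficient
lam_sigma2 = 1.0e-7                   # risk aversion times price variance

def cost(x_inner):
    x = np.concatenate(([X], x_inner, [0.0]))   # holdings, endpoints fixed
    trades = -np.diff(x)
    impact = eta * np.sum(trades**2)            # expected execution cost
    risk = lam_sigma2 * np.sum(x[1:-1]**2)      # penalty on held inventory
    return impact + risk

x0 = np.linspace(X, 0.0, N + 1)[1:-1]           # start from a linear schedule
res = minimize(cost, x0)
schedule = np.concatenate(([X], res.x, [0.0]))  # risk-averse: front-loaded
```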

  3. Marcos C. S. Carreira, BM&F Bovespa – Brazil - Dynamics of interest rates in Brazil: matrices and trees.

    Abstract: The dynamics of the overnight interest rate must be modeled through probability transition matrices in order to correctly price derivative payoffs that depend on the monetary policy path. DI futures and IDI options will be defined and studied within this framework, and similarities with other products will be explored.
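
    A toy version of the matrix machinery may clarify the framework. In the hedged Python sketch below, the policy rate moves on a small grid of levels at each meeting according to a transition matrix, and a payoff is valued by propagating the state distribution (all numbers invented; discounting along the path is omitted for brevity):

```python
import numpy as np

# Pricing over monetary-policy meetings with a probability transition matrix.
# States are target-rate levels; P[i, j] is the per-meeting probability of
# moving from level i to level j. Illustrative numbers, no discounting.

rates = np.array([0.1000, 0.1050, 0.1100])
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.0, 0.3, 0.7]])

def expected_payoff(payoff, state0, n_meetings):
    dist = np.zeros(len(rates)); dist[state0] = 1.0
    for _ in range(n_meetings):
        dist = dist @ P                 # propagate the rate distribution
    return dist @ payoff(rates)         # expectation of the terminal payoff

# e.g. a caplet-like payoff on the policy rate with a 10.5% strike
value = expected_payoff(lambda r: np.maximum(r - 0.105, 0.0),
                        state0=0, n_meetings=3)
```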

  4. Paulo Sérgio Cavalheiro, Banco Safra – Brazil - Basel II - Requirements related to proprietary models in banking.

    Abstract: During the last decades, the banking system has developed several tools to control and monitor the risks it faces, given mainly the conditions of the economic and political environments. In several cases, those tools are mathematical and statistical models that are used for other purposes required by the activity, such as estimating the probability of default, establishing accounting provisions, calculating economic capital, setting operational limits, etc. As we know, banking is, all over the world, a strongly state-regulated activity, due to the recognized and huge impacts a failure can cause to the economy. In this sense, the Basel Committee on Banking Supervision was established in 1974 at the Bank for International Settlements, in Basel, Switzerland, in order to issue recommendations that might or should be followed by the regulators of the various countries that participate in that committee. Some countries that are members of the Basel Committee have already implemented the standards of the document “Basel II: International Convergence of Capital Measurement and Capital Standards: A Revised Framework”, and others, like Brazil, are in the phase of implementing those standards. The Basel II agreement allows banks to calculate their regulatory capital by means of proprietary mathematical models, but banks must have strong internal controls in place, composed of policies and processes, which means, in short, implementing best practices of corporate governance.

    In this sense, implementing sound and transparent corporate governance is a fundamental development for the assurance, reliability and adequacy of capital allocation. It is this corporate governance, beyond the mathematical models applied in banking, that we will discuss.

  5. Jorge P. Zubelli, IMPA - Instituto de Matemática Pura e Aplicada, Brazil - Investment Decisions under Uncertainty, Real Options and Quantitative Finance.

    Abstract: Industrial strategic decisions have evolved tremendously in the last decades towards a higher degree of quantitative analysis. Such decisions require taking into account a large number of uncertain variables and volatile scenarios, much like financial market investments. Furthermore, they can be gauged and compared to portfolios of investments such as stocks, derivatives and futures. This revolution led to the development of a new field of managerial science known as Real Options. The use of Real Options techniques also incorporates the value of flexibility and gives a broader view of many business decisions, bringing in techniques from quantitative finance and risk management. Such techniques are now part of the decision-making process of many corporations and require a substantial amount of mathematical background. In particular, they require many tools from stochastic control and partial differential equations. In this talk we shall survey some basics of this approach as well as some applications to industrial problems.

 

W4 - Math Modeling in Medicine – Sean McKee – Strathclyde University – Scotland

Abstract - Mathematical Physiology is a discipline that has come of age. These five talks are at the cutting edge of the field. Brian Sleeman will propose a new model of sprouting angiogenesis (the vascularisation of a tumor) through the use of stochastic differential equations, thereby unifying a number of existing techniques and models. The understanding of the biological description of blood coagulation has undergone a dramatic change in recent years. Antonio Fasano will review and discuss the history of the attempts at modeling this difficult and complicated biological process. Modeling the circulation system can be extremely complex, and decisions on how to multiscale can be crucial. Pablo Blanco will propose a decomposition approach to this computational hemodynamic problem and demonstrate its robustness and suitability. Transcapillary filtration involves flow through porous membranes. Siv Sivaloganathan will employ Biot's classical consolidation theory; this will allow, for the first time, the stress distribution across the capillary to be obtained. Drugs are often incorporated in a polymer surrounding the metal cage of a stent to inhibit inflammation of the endothelial cells. Sean McKee will provide a simple model of drug-eluting stents: a special analytic solution will be obtained, and the results of a sensitivity analysis will be discussed, particularly with reference to what the model tells us about the physiology.

Speakers

  1. Antonio Fasano – Dipartimento di Matematica U. Dini – Italy - Modeling blood clotting: a review and some perspectives.

    Abstract: Blood coagulation is the process that leads to the formation of a clot (thrombus). The clot is a gel formed by a polymeric network entrapping blood constituents. Its task is to seal blood vessel injuries, thus stopping bleeding. The process must be very effective, but it also has to come to a stop before the clot grows to a size that would obstruct the vessel. The clot has to stay in position for a time sufficient for wound healing (which is a different process, involving the reconstruction of the damaged tissue), but after that it has to be dissolved. Actually, coagulation and dissolution take place simultaneously, but over different time scales, also because the latter has to occur very smoothly, so as to prevent fragments of the clot from entering the bloodstream (with the risk of producing pulmonary embolism or strokes). Therefore the regulation of the whole mechanism has to be very precise, and any “mistake” can have fatal consequences. The process that starts at the injury site and goes through the subsequent steps (rapid growth, termination, dissolution) is incredibly complicated. It involves the activation of platelets (tiny anucleated cells able to perform many operations) and it exploits a cascade of chemical reactions in which a large number of chemicals intervene. Any defect in the cascade may lead to more or less serious bleeding disorders.

    The biological description of blood coagulation has undergone a dramatic change in recent years. I will report the presently accepted one and briefly review some history. Concerning mathematical modeling, I will mention the main modern approaches (sometimes strongly diverging) and I will explain why we badly need some new ideas.

    Most of the material will be taken from A. FASANO, R. SANTOS, A. SEQUEIRA. Blood coagulation: a puzzle for biologists, a maze for mathematicians. In MODELLING PHYSIOLOGICAL FLOWS - D. Ambrosi, A. Quarteroni, G. Rozza, Editors, Springer (2011), to appear.

  2. Sean McKee – University of Strathclyde – UK - Modelling Drug-eluting Stents.

    Abstract: This talk has essentially two parts.

    Firstly, a family of (one-dimensional) mathematical models describing the elution of drug from a polymer-coated stent into the arterial wall is studied. The models include the polymer layer, the media, the adventitia, a (possible) top-coat polymer layer and an atherosclerotic plaque. We investigate numerically the relative importance of transmural convection, diffusion and drug-dependent parameters in the drug delivery and deposition; in addition we investigate how the release rate from the stent can be altered and examine the effect on cellular drug concentrations.

    Secondly, if the release rate from the polymer is known we show, using Laplace transforms, that an analytic solution for the concentration in the target cells and the interstitial region of the media may be obtained. (This involves the usual Bromwich contour modified to cope with three branch points.) Pontrelli and de Monte's model (and analytic solution) for drug concentration in both the polymer layer and the media is introduced; and it is shown, by combining the two models, that the release rate may be obtained from a second kind Volterra integral equation.
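
    Once the kernel and forcing are known, a second-kind Volterra equation of this type is routinely solved by quadrature. The hedged Python sketch below uses the trapezoidal rule (the kernel and forcing are illustrative placeholders, not the actual stent model):

```python
import numpy as np

# Trapezoidal solver for a second-kind Volterra integral equation
#   u(t) = g(t) + \int_0^t K(t, s) u(s) ds.
# K and g below are placeholders, not the kernel of the drug-eluting-stent model.

def volterra2(K, g, T, n):
    h = T / n
    t = np.linspace(0.0, T, n + 1)
    u = np.zeros(n + 1)
    u[0] = g(t[0])
    for m in range(1, n + 1):
        # trapezoidal rule on [0, t_m]; the unknown u[m] appears on both sides
        s = 0.5 * K(t[m], t[0]) * u[0] + sum(K(t[m], t[i]) * u[i]
                                             for i in range(1, m))
        u[m] = (g(t[m]) + h * s) / (1.0 - 0.5 * h * K(t[m], t[m]))
    return t, u

t, u = volterra2(K=lambda t, s: np.exp(-(t - s)), g=lambda t: 1.0, T=2.0, n=200)
```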

  3. Siv Sivaloganathan - Dept. of Applied Mathematics - University of Waterloo – Canada - A Poroelastic Model of Transcapillary Filtration.

    Abstract: Starling's seminal work on the absorption of fluids from connective tissue spaces (and Starling's hypothesis that the energy for transcapillary flow lies in the difference between hydrostatic and osmotic pressures across the capillary wall) has long formed the basis of experimental physiology. Related recent experimental evidence points to a more active role of the interstitium in controlling interstitial fluid pressure (IFP), which has significant implications for clinical oncology. In light of these considerations, it is clearly of importance to reconsider the relationship between IFP and transcapillary transport, in addition to the regulation of IFP in normal tissue. In this talk, we adopt the Michel-Weinbaum viewpoint on the locality of Starling forces and model the capillary wall as a poroelastic solid, using Biot's consolidation theory. However, the incorporation of the Michel-Weinbaum hypothesis requires an extension of Darcy's law to include the effects of oncotic pressure to describe the mechanism of filtration through the capillary wall. A unique feature of this model of transcapillary flow is its ability to predict the stress and strain distribution across the capillary wall, which to our knowledge has not been attempted before.

  4. Brian Sleeman – University of Leeds – UK - Modelling Angiogenesis through Stochastic Differential Equations.

    Abstract: It is well known that avascular tumours only grow to a limited size before metabolic demands are impeded due to the diffusion limit of oxygen and other nutrients. For continued growth the tumour switches to an angiogenic phenotype that induces sprouting of new blood vessels from the surrounding medium. Sprouting angiogenesis is the most widely studied aspect of neovascular growth and has been modeled from several mathematical points of view. In this paper we propose a new underlying theme which unifies a number of the existing techniques employed to model angiogenesis. The basic formulation is in terms of stochastic differential equations. The ideas discussed have wide application, particularly in the validation of models of vessel co-option, vasculogenic mimicry and lymphangiogenesis.
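
    The workhorse for simulating such stochastic formulations numerically is the Euler-Maruyama scheme. The sketch below is a generic, hedged illustration (placeholder drift and diffusion; an actual sprouting-tip model would encode chemotaxis toward the tumour in the drift):

```python
import numpy as np

# Euler-Maruyama sketch for an Ito SDE  dX = mu(X) dt + sigma(X) dW.
# mu and sigma below are placeholders, not the angiogenesis model of the talk.

def euler_maruyama(mu, sigma, x0, T, n, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n
    x = np.empty((n + 1,) + np.shape(x0))
    x[0] = x0
    for k in range(n):
        dW = rng.normal(0.0, np.sqrt(dt), size=np.shape(x0))
        x[k + 1] = x[k] + mu(x[k]) * dt + sigma(x[k]) * dW
    return x

# e.g. a 2D "sprout tip" drifting toward the origin with diffusive wandering
path = euler_maruyama(mu=lambda x: -0.5 * x, sigma=lambda x: 0.2,
                      x0=np.array([1.0, 0.0]), T=10.0, n=1000)
```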

  5. Pablo Blanco – LNCC – Brazil - A decomposition approach for heterogeneous modeling in computational hemodynamics.

    Abstract: This work presents a generic black-box approach for the strong iterative coupling of dimensionally-heterogeneous flow models in computational hemodynamics. A given heterogeneous model is formed through the contribution of a set of different homogeneous models. Each homogeneous model is regarded as a black-box. The main concern here is with the use of an efficient iterative algorithm to solve the system of non-linear interface equations that perform the coupling between the different models. The proposed algorithm is employed to split a coupled 3D-1D-0D closed-loop model of the cardiovascular system into the corresponding black-boxes standing for the 3D (specific vessels), 1D (systemic arteries/peripheral vessels) and 0D (venous/cardiac/pulmonary circulation) models. Examples of application are presented showing the robustness and suitability of this novel approach.

 

W5 - Mathematical Models in Emerging and Re-emerging Diseases – Prof. Eduardo Massad – Medical School – USP

Speakers

  1. Hyun Mo Yang - UNICAMP, Brazil - Mathematical modeling of immune response against Trypanosoma cruzi.

    Abstract: We develop a mathematical model to assess the immune response against Trypanosoma cruzi infection. The model compares the action of the humoral and cellular immune responses. The model shows that the non-trivial equilibrium, which is unique, is always stable (locally and asymptotically), except in the case of the cellular response, where limit cycles appear if the proliferation of the activated CD8 T cells is increased.
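
    A toy caricature of such host-parasite / immune-cell dynamics can be written in a few lines. The hedged Python sketch below is emphatically not the authors' model; it is a predator-prey-style illustration of how increasing a proliferation parameter can push a system of this general kind from a stable equilibrium toward sustained oscillations:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy parasite / activated-T-cell system (illustrative only, not the talk's model):
#   dP/dt = r P (1 - P/k) - a P E     parasite growth minus immune killing
#   dE/dt = b P E - d E               antigen-driven proliferation minus decay

def rhs(t, y, r=1.0, k=2.0, a=1.5, b=0.8, d=0.5):
    P, E = y
    return [r * P * (1.0 - P / k) - a * P * E,
            b * P * E - d * E]

sol = solve_ivp(rhs, (0.0, 200.0), [0.1, 0.1], max_step=0.1)
# sweeping the proliferation rate b is how one would probe for the onset of
# sustained oscillations (limit cycles) of the kind mentioned in the abstract
```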

  2. Francisco Coutinho – University of São Paulo – Brazil - The effect of the history of the infection within individual host on its propagation in the population.

    Abstract: In this work we consider the effect of the history of the infection within the individual host on the propagation of the infection in the population. We construct a nested model to describe this effect. The history of the individual host appears in the function that describes the contact rate between individuals and in the probability that a susceptible individual becomes infected when contacting an infected one. The probability of infection depends on the age of the infection within the infected host.

    In this paper we first consider one infectious agent, and then elaborate a model that considers more than one infectious agent and the competition between them.

  3. Marcos Amaku – University of São Paulo – Brazil - Modeling the competition between viruses in a complex plant-pathogen system.

    Abstract: In this work we propose a mathematical framework that describes the competition between two plant virus strains (MAV and PAV) for both the host plant (oats) and their aphid vectors. We found that although PAV is transmitted by two aphids and MAV by just one, this fact, by itself, does not explain the complete replacement of MAV by PAV in New York State during the period from 1961 through 1976, an interpretation that is in agreement with Power's theories (1996). Also, although MAV wins the competition within the aphids, we assumed that in 1961 PAV mutated into a new variant that was able to overcome MAV within the plants during a latent period. As we show, this is sufficient to explain the replacement of strains, that is, the dominant MAV was replaced by PAV, also in agreement with Power's expectations.

  4. Eduardo Massad – University of São Paulo – Brazil - Modeling the Efforts to Control Dengue.

    Abstract: In this work we present what we think is the simplest model that encapsulates all the important variables related to dengue control, and we analyze several control strategies against dengue. We present the basic model that describes the dynamics of dengue infection and deduce thresholds for elimination of the disease. In particular, we deduce the Basic Reproduction Number for dengue. We also present an analysis of the impact of several vector-control strategies on the Basic Reproduction Number of dengue. Next, we present a model to assess the impact of releasing genetically modified sterile male mosquitoes on dengue transmission. In addition, we show a simple model that describes a possible biological control of Aedes mosquitoes, exemplified by the introduction of Wolbachia-infected mosquitoes in a given region affected by dengue.

    Finally, we analyze a model to investigate the effect of the introduction of a monovalent vaccine against dengue on the incidence of the infection. The analysis of the models presented allows us to conclude that, of the available vector-control strategies, adulticide application is the most effective, followed by searching for and destroying breeding sites and by larvicides; releasing transgenic males is an effective strategy, but it is environmentally and financially expensive; Wolbachia infection of Aedes is very effective against dengue; and vaccination is effective, but the vaccine must be tetravalent.

 

W6 - Mathematical Methods in Space Dynamics - Sylvio Ferraz-Mello - IAG – USP – Brazil

Abstract – The advances in the field of dynamical systems have led to the development of innovative methods and techniques for investigating stable and chaotic motions. The application of these new methods to the classical 3-body problem led to the discovery of a great variety of trajectories, novel space ways and orbital configurations for artificial satellites and spacecraft. The increasing understanding of the available mission options has emerged from theoretical, analytical and numerical advances in many aspects of libration point dynamics. The orbits around the stationary libration points of the Earth-Moon and Sun-Earth systems are being used in the planning of missions for solar and astronomical observation. They were used in the design of some current missions, such as MAP, SOHO and Genesis, and also of some of the more challenging future ones, such as Darwin and TPF, as well as in the proposition of new low-cost transfer orbits that allow spacecraft whose missions have ended to be moved to new orbits where they can start new work.

Speakers

  1. Daniel J. Scheeres, University of Colorado at Boulder – USA - Mathematics in Earth Orbit: The Dynamics of Earth's Artificial Orbital Population.

    Abstract: Since the dawn of the space age there has been a growing population of human-made objects in Earth orbit. These include operational satellites, defunct satellites, and a large population of debris objects. The density of these objects is now great enough that collisions between them ensure that the orbital debris population about the Earth will continue to increase even if no more satellites are launched. The science of understanding what objects are in orbit about the Earth, what their characteristics are, and what predictions can be made regarding their dynamical evolution is highly mathematical. Solving these problems involves a combination of astrodynamics (the study of motion in space), estimation theory (to determine what their orbits are), optimal control theory (to understand how active satellites can maneuver), modeling (to describe the forces and torques which act on these bodies), and numerical simulation and analysis. All of these topics intensively mix advanced mathematical applications and theories with a pressing issue for the future of humanity's ability to utilize space. This talk will touch on a few topics of recent research that address some of these questions through the use of advanced mathematics.

  2. Gerard Gomez, University of Barcelona- Spain - The role of Dynamical Systems in spacecraft mission analysis.

    Abstract: It is broadly accepted that the foundations of Dynamical Systems theory were established by H. Poincaré by the end of the XIX and the beginning of the XX centuries. Since then, this theory has used Celestial Mechanics, and -in particular- the Restricted Three Body Problem, as one of its main test beds. But only during the last twenty years have most of the Dynamical Systems methods become a tool for the design of spacecraft missions. Although its application to mission analysis is recent, it has already been used in various missions such as SOHO, Genesis, MAP, Herschel/Planck and Gaia.

    For a given dynamical model, the Dynamical Systems procedures, both quantitative and qualitative, give an accurate picture of the phase space of the system. By understanding the geometry of the phase space of the dynamical model considered for a spacecraft mission, and the solution arcs that populate it, the mission designer is free to creatively explore concepts and ideas that in the past may have been considered intractable or had not yet been envisioned. Beyond baseline trajectory design, other analyses required for any mission can also benefit from studies of motion in this regime; for example, low-energy transfers to nominal orbits, station-keeping methods for various mission scenarios, eclipse avoidance strategies or formation flying near libration points.

    In this talk we will give the basic ideas related to the Dynamical Systems tools used in this context and show how they have been applied in several questions of the mission design of SOHO and Genesis.

  3. Ettore Perozzi, Telespazio, Rome, Italy – Space Manifold Dynamics and Industry: Lunar satellite constellations and low-altitude orbiters around the Moon.

    Abstract: The term “Space Manifold Dynamics (SMD)” refers to the various applications of the dynamical systems approach to space mission design. The connection of SMD to classical celestial mechanics and astrodynamics is apparent in several cases, such as the temporary satellite capture of comets by Jupiter, the ballistic capture of a spacecraft by the Moon, or the bi-elliptic transfer and certain low-energy lunar transfer trajectories. Yet the industrial approach requires SMD to be also cost-effective; that is, the advantages should not be evaluated solely on purely dynamical considerations (typically the gain in the total delta-V budget of a mission). Other issues, such as the launch scenario, the sharing of orbital manoeuvres between the launcher and the spacecraft, the on-board engine performance, the operations complexity and the associated risks must also be taken into account.

    Within this framework, if the exploration and exploitation of the Moon is to be sustainable, the associated servicing missions must be realized at an affordable cost. It is then worthwhile to carry out a detailed analysis of the potential benefits associated with the chaotic nature of SMD lunar transfers in two specific cases.

    The deployment of a satellite constellation for communicating with the whole lunar surface is a basic asset for any long-term exploration of the Moon and SMD can be profitably used to this end. It is possible to show that the dynamical properties of internal SMD transfers can be exploited to launch three small spacecraft onboard the same launch vehicle and send them to widely different orbits around the Moon (from equatorial to polar) with no significant changes in the Delta-V budgets.

    Low altitude orbiters for scientific purposes are needed for surveying the surface of the Moon and mapping the complex lunar gravity field. The case for the MAGIA lunar mission, proposed to the Italian Space Agency, is discussed in detail. In particular a trade-off analysis is carried out between Hohmann-like and SMD transfers in order to highlight advantages and drawbacks for both the system engineering and the operational scenario of the whole mission.

  4. Elbert E. N. Macau – INPE, Brazil - Control of Chaos and its Relevancy to Low-Thrust Transfer.

    Abstract: In recent years, a seminal work named Controlling Chaos showed that not only can chaotic evolution be controlled, but the complexity inherent in chaotic dynamics can also be exploited to provide a unique level of flexibility and efficiency in technological uses of this phenomenon. These results have had profound impacts on the field of astrodynamics, where they have fostered the development of new and innovative techniques to construct low-energy spacecraft transfer trajectories. Those techniques exploit the unique characteristics of chaotic behavior and are developed within the framework provided by dynamical systems theory. Thus, the sensitive dependence on initial conditions, which is the hallmark of chaos, allows one to properly change the spacecraft trajectory using small perturbations, while the invariant manifold structures associated with the invariant sets present in chaotic dynamics can be exploited as very efficient pathways among regions of the invariant set. In this talk the main ideas regarding control of chaos and targeting in the context of astrodynamics, and their mathematical foundations, are laid out. As an application, a low-energy transfer to the L4 Lagrangean point is developed. The Lagrangean points L4 and L5 lie 60 degrees ahead of and behind the Moon in its orbit around the Earth. Each of them is the third vertex of an equilateral triangle whose base is the line defined by those two bodies. These Lagrangean points are stable with respect to perturbations. Because of their distance, electromagnetic radiation from the Earth arrives at them substantially attenuated. Hence, these Lagrangean points are remarkable positions at which to host astronomical observatories. However, this same distance may be a challenge for periodic servicing missions. In this work, we introduce a new low-thrust orbital transfer strategy that opportunistically combines chaotic and swing-by transfers into a very efficient strategy that can be used for servicing missions to astronomical facilities placed at the Lagrangean points L4 or L5. This strategy is not only efficient with respect to thrust requirements, but its transfer time is also comparable to that of other known transfer techniques based on time optimization.

 

W7 - Mathematical Modeling in Aerospace Applications- João Luiz F. Azevedo - CTA-IAE – Brazil

Abstract – Mathematical modeling has always played a very important role in many aspects of aerospace engineering. In particular, in the treatment of applications in aerodynamics, aeroelasticity, structural dynamics and aeroacoustics, among others, computational modeling has become an integral part of how aerospace vehicles are analyzed and designed in industry. The present workshop will focus on some of the current research in those areas, including the trend towards the use of high-order methods, which may be essential for adequately modeling some of the physics in these applications. The workshop will also discuss specific projects which are currently being conducted with strong industrial participation, thus demonstrating the need for mathematical modeling in industry.

Speakers

  1. Juan J. Alonso, Department of Aeronautics and Astronautics, Stanford University, Stanford, California, USA - Design of Low-Boom Supersonic Aircraft with Aerodynamic Performance Considerations.

    Abstract: Research efforts over the past 15 years are making the design of aircraft that can fly supersonically over land a real possibility. In order to decrease the perceived intensity of the sonic boom at the ground, the volume and lift distribution of an aircraft must be tailored so that the boom signature is appropriately shaped. For this purpose, design optimization techniques coupled with high-fidelity CFD analyses and adaptive mesh resolution are needed. This talk describes our recent efforts to develop adjoint-based techniques for the aircraft equivalent-area distribution that can be used to both shape the sonic boom and maintain sufficient aerodynamic performance to make the ultimate aircraft viable. The application of these techniques results in viable designs that will be discussed in this presentation.

  2. Zhi Jian (ZJ) Wang, Department of Aerospace Engineering, Iowa State University, Ames, Iowa, USA - The Development of Adaptive High-Order CFD Methods and Their Applications.

    Abstract: A current breakthrough in computational fluid dynamics (CFD) is the emergence of adaptive high-order (order > 2) methods. The leader is the discontinuous Galerkin method, among several other methods including residual distribution, streamline upwind Petrov-Galerkin (SUPG), multi-domain spectral, spectral volume (SV), spectral difference (SD) and correction procedure via reconstruction (CPR) methods. All these methods possess the following properties: k-exactness on arbitrary grids, and compactness, which is especially important for parallel computing on clusters of CPUs and GPUs. In this talk, I will describe several recent developments in discontinuous high-order methods such as the SV, SD and CPR formulations, and highlight the similarities and differences. In addition, the application of high-order methods to compute transitional flow over a SD7003 wing and flow over flapping wings will be presented. The talk will conclude with several remaining challenges in the research on high-order methods.

  3. Julio Romano Meneghini, Department of Mechanical Engineering, Universidade de São Paulo, São Paulo, SP, Brazil - The Brazilian Aircraft Initiative: An Aero-acoustic Investigation.

    Abstract: The main goal of this investigation, which is one of the research projects in Fapesp's PICTA (Program of Innovation in Aerospace Science and Technology), is to investigate and develop solutions for aircraft noise. This problem is tackled from three distinct, but still complementary, approaches: computational aeroacoustics (CAA), empirical and analytical models, and experimental flight and wind tunnel tests. The core of the project is the development of an analysis methodology that estimates the generation and propagation of aircraft noise. The intent is to group academic and technical teams which already have experience in aerodynamic problems, and more recently in aeroacoustics, to propose solutions in one of the approaches cited above. Our intention is to integrate these efforts, to advance knowledge through the complementarity of the three approaches, and, finally, to support solutions to questions raised from the point of view of the engineering problem associated with aircraft noise. In the end, the project's concept exploits the complementary points of view provided by the distinct approaches.

  4. William Roberto Wolf, Aerodynamics Division, Instituto de Aeronáutica e Espaço, São José dos Campos, SP, Brazil - Numerical Investigation of Airfoil Self-Noise Generation and Propagation.

    Abstract: The investigation of airfoil noise generation and propagation is of paramount importance for the design of aerodynamic configurations such as wings and high-lift devices, as well as wind turbine blades, fans and propellers. The present study of airfoil self-noise concerns the broadband noise that arises from the interaction of turbulent boundary layers with the airfoil trailing edge and the tonal noise that arises from vortex shedding generated by laminar boundary layers. The combined direct numerical simulation of both noise generation, and its subsequent propagation to the far field, is prohibitively expensive due to resolution requirements. Therefore, hybrid methods are typically employed, in which computational fluid dynamics is used to calculate the near flow field quantities responsible for the sound generation, which are in turn used as an input to a propagation formulation that calculates the far field sound signature. In this work, near field acoustic sources are computed using large eddy simulation and aeroacoustic predictions are performed by acoustic analogy. Numerical simulations are conducted for a NACA0012 airfoil under different flow conditions.

  5. Luís Carlos de Castro Santos - Simulations & CFD - Systems Engineering - Embraer - São José dos Campos, SP, Brazil - Mathematical Modeling in Systems Engineering.

    Abstract: The development of complex engineering systems extensively relies on the use of mathematical models. Systems comprise the collection of hardware, software and people coordinated to perform an expected function. Being able to forecast and tailor the behavior of a system in advance requires the ability to model it and its interfaces in a recursive way (system-of-systems). This talk will describe the mathematical elements of systems engineering supported by applications from the aeronautical industry.

 

W8 - Modelling of Multiphase Flows - Luis Portela – Delft – Netherlands

Abstract – Many industrial problems rely on the modeling of different types of multiphase flows, such as immiscible bubbly flows, slug flows, churn flows and particle-laden unsteady flows. In this symposium, top scientists will discuss the challenges in modeling and simulating multiphase flows. Gretar Tryggvason will present the modeling of flows with different immiscible fluids with a multiscale approach. Among the different scales in simulating bubbly flows, his talk will focus on the simulation of deformable bubbles in a turbulent channel, modeling features such as thin films and filaments, and resolving the largest scales by direct numerical simulation. On the other hand, many industrial applications, such as sediment deposition, involve large amounts of particles in a transient turbulent flow. Recent developments in the corresponding mathematical modeling and numerical simulations will be presented by Olivier Simonin. Raad Issa will give an insight into modeling intermittent flows, such as slug, wavy and churn flows, with one-dimensional transient two-fluid models. One-dimensional modeling of multiphase flows is also the concern of Chris Lawrence, who will address the theoretical and practical challenges in modeling such flows using coarse grids.

Speakers

  1. Gretar Tryggvason - U. Notre-Dame, USA - Multiscale Issues in DNS of Multiphase Flows.

    Abstract: Direct numerical simulations (DNS) of multiphase flows, where every continuum length and time scale is fully resolved, have now advanced to the point where it is possible to study in considerable detail fairly complex systems, such as the flow of hundreds of bubbles, drops, and solid particles. Here we discuss such simulations from a multi-scale perspective, focusing on two aspects: First of all, DNS results can help with the development of closure relations of unresolved processes in simulations of large-scale “industrial” systems. As an example we discuss recent results for deformable bubbles in weakly turbulent channel flows. The lift-induced lateral migration of the bubbles controls the flow, but the lift is very different for nearly spherical and more deformable bubbles, resulting in different flow structures and flow rates. Nevertheless, the results show that the collective motion of many bubbles leads to relatively simple flow structure in both cases, emphasizing the need to examine as large a range of scales as possible. The other multi-scale aspect results from the fact that multiphase flows often produce “features” such as thin films, filaments, and drops that are much smaller than the “dominant” flow scales. The geometry of these features is usually simple, since surface tension effects are strong and inertia effects are relatively small. In isolation these features are therefore often well described by analytical or semi-analytical models. Recent efforts to capture thin films using classical thin film theory, and to compute mass transfer in high Schmidt number flows using boundary layer approximations, in combination with direct numerical simulations of the rest of the flow, are described.

  2. Olivier Simonin - IMFT, France - Mathematical modeling and numerical simulation of dilute or dense particle laden unsteady flows.

    Abstract: Missing.

  3. Luís M. Portela - Delft University of Technology - Delft - Netherlands - Computer Simulations of Multiphase Flows: Possibilities, Limitations, and Challenges.

    Abstract: Multiphase flows are important in many situations, both in the environment and in the industry. In the environment they occur, e.g.: in the spill of oil in the sea, the transport of sediment in rivers and the ocean, in the transport of aerosols in the atmosphere, and in the dynamics of droplets inside clouds. In the industry they occur, e.g.: in oil and gas production and transport, in chemical processes, in nuclear energy, in steel production, in mining, and in food production. Actually, most industrial flows are multiphase flows.

    Multiphase flows usually involve a multitude of phenomena at a wide range of length and time scales, and their modeling imposes severe challenges from both a physical and a mathematical perspective. From a physical perspective, the challenges range from the sometimes still poorly understood complex individual phenomena to the multitude of phenomena involved and their multiple interactions, sometimes with rather unexpected and important consequences.

    From a mathematical perspective, the challenges range from the theoretical aspects associated with the structure of the equations resulting from the physical models, to the development of efficient numerical algorithms able to deal with large multi-scale phenomena.

    In this presentation, we give an overview of the use of computer simulations in multiphase flows. We present a few examples, and use them to illustrate their possibilities and limitations, and the challenges that need to be addressed.

  4. Chris Lawrence - Chief Scientist, Institute for Energy Technology - Norway - One-dimensional multiphase flow models on a coarse grid – theoretical and practical issues.

    Abstract: When simulating multiphase flow in long pipelines, it is natural to consider simplifying the models to a one-dimensional form, and this has proved to be a very successful strategy. 30 years ago, IFE used expertise derived from nuclear energy research to produce a simulator for two-phase pipeline flows of gas and oil - that simulator has since evolved and grown through continued research and development into a very successful commercial software application. This presentation will address some of the theoretical and practical challenges imposed by the constraints of a one-dimensional modeling approach and the use of a coarse grid. Significant progress has been made in the modeling of separated (stratified) flow, but the indications are that there is still much to be gained from further research on flows with dispersions and intermittent (slug, churn, large wave) flows.

 

W9 - Math Modeling of New Materials - Osni Marques - Lawrence Berkeley National Laboratory - USA

Abstract – Advances in modeling, algorithms and computational techniques have greatly contributed to the study and understanding of materials, potentially leading to the creation of new materials, processes and devices that can be used in a variety of applications, including medicine, electronics, energy production and storage, and lightweight, durable and highly resistant construction. Of particular interest is the study of materials at the nanometer scale (or nanoscale): at this scale the properties of the material cannot be treated by bulk models; they are modified by finite-size quantum mechanical effects. Although the behavior of matter is essentially determined by the many-body Schrödinger equation, in practice the exact solution of this equation is not feasible. Therefore, over the years approximations to the Schrödinger equation have been developed, and these approximations can then be solved through diagonalization (i.e. eigenvalue computations) or (in some cases) optimization techniques. However, systems of practical interest are usually very large, thus creating opportunities for the investigation of efficient algorithms and numerical techniques. In addition, as we move towards computer architectures that are intrinsically parallel, the cost of a floating-point operation versus the cost of moving data cannot be neglected. This workshop will focus on topics that impact the simulation and realization of materials, including recent developments in the modeling of ceramic semiconductors, applications in cementitious materials for performance and resistance, and numerical algorithms to speed up the time-consuming steps of electronic structure calculations, together with related high-performance computer codes for realistic simulations.
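
    The "diagonalization" step mentioned above is easy to illustrate at toy scale. The hedged Python sketch below discretizes a one-dimensional, single-particle Schrödinger operator with finite differences and diagonalizes it (a harmonic well in natural units; real electronic-structure calculations solve vastly larger, self-consistent versions of this eigenproblem):

```python
import numpy as np

# 1D single-particle Schrodinger eigenproblem, -1/2 psi'' + V psi = E psi,
# discretized with second-order finite differences and solved by dense
# diagonalization. Harmonic well V = x^2/2 in natural units (illustrative).

n, L = 500, 20.0
x = np.linspace(-L / 2.0, L / 2.0, n)
h = x[1] - x[0]

V = 0.5 * x**2
main = 1.0 / h**2 + V                     # kinetic diagonal plus potential
off = -0.5 / h**2 * np.ones(n - 1)        # kinetic off-diagonals
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E, psi = np.linalg.eigh(H)                # the time-consuming step at scale
print(E[:4])                              # close to 0.5, 1.5, 2.5, 3.5
```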

Speakers

  1. Haroldo Bernardes - UNESP - Ilha Solteira – Brazil - Mathematical Modeling of Alkali Aggregate Reaction in Concrete.

    Abstract: The models of behavior of concrete affected by Alkali Aggregate Reaction (AAR) play a fundamental role in the design, construction and monitoring of large structures such as dams and bridges. Although this phenomenon has been known since 1940, researchers still seek to understand the causes and mechanisms of development of the chemical reactions that occur inside the micro- and nanostructure of the material, causing expansion, cracking and therefore costly damage. Several types of models have been used to represent this problem with microstructural and phenomenological approaches, using parameters of various types with or without physical meaning. Associated with advanced metering and monitoring systems deployed in large structures, models have been fundamental for control and decision-making in the management of large structural systems. This presentation will show some cases of structures affected by AAR, the main types of models currently applied, and the main points that are still open to research in the modeling of this problem.

  2. Márcio Lima do Nascimento - Universidade Federal do Pará, Brazil - Fractals and Scaling Applied to the Mechanics of Materials Math Models.

    Abstract: The analysis of physical and chemical phenomena that affect materials from a multiscaling perspective has been empowered over the years by molecular-scale studies and observations, with important implications in science, technology and industry. This presentation addresses the problem from the theory of fractals, after an introduction to dimensional analysis and self-similarity. Examples are given of scale factors both in strict geometry problems and in applied problems, touching on some topological characteristics of fractals. We will discuss the concept of size and the use of statistical tools for modeling, given the heterogeneity of natural materials. We will also discuss Julia and Mandelbrot sets, fractals related to iterations of complex functions, and aspects of self-similarity of these sets. As examples of applications of the scale factors, we will discuss self-similarity and dimension in the study of deformation and fracture models in computational nanomechanics, with a potential use in engineering problems.
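
    As a small concrete companion to the scaling ideas above, the hedged Python sketch below estimates a box-counting dimension, the standard numerical notion of fractal dimension: cover the set with boxes of size eps, count the occupied boxes N(eps), and read the dimension off the slope of log N versus log(1/eps). The test set here is a smooth curve, for which the estimate should be near 1.

```python
import numpy as np

# Box-counting dimension estimate: N(eps) ~ eps^(-D), so D is the slope of
# log N(eps) against log(1/eps). The point set below is illustrative.

def box_count_dim(points, epsilons):
    counts = []
    for eps in epsilons:
        boxes = {tuple(np.floor(p / eps).astype(int)) for p in points}
        counts.append(len(boxes))
    D, _ = np.polyfit(np.log(1.0 / epsilons), np.log(counts), 1)
    return D

theta = np.linspace(0.0, 2.0 * np.pi, 5000)
circle = np.c_[np.cos(theta), np.sin(theta)]
print(box_count_dim(circle, np.array([0.1, 0.05, 0.02, 0.01])))   # ~ 1
```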

  3. Jean-Luc Fattebert - Lawrence Livermore National Laboratory - USA - Some Numerical Challenges in Large Scale Simulations of Condensed Matter, from Atomistic to Microstructure Computations.

    Abstract: Large parallel computers are becoming ubiquitous in many scientists' lives. While enabling new research into larger and more realistic problems, they also present challenges. To use those resources efficiently, one has to deal with issues such as parallel scaling and load balancing, but also code complexity and maintenance. Algorithmic complexity is sometimes also an issue, since complexity beyond O(N) can limit the benefit of increasing computer resources. I will discuss some of those challenges in condensed matter simulations I have been involved in, ranging from atomistic First-Principles molecular dynamics to Phase-Field modeling of microstructures. I will show how general software tools can help in some cases, and how other challenges require a completely different algorithm.

  4. George Fann - Oak Ridge National Laboratory - USA - The Computational Structure and Mathematics of MADNESS

    Abstract: I will present the mathematical background and the parallel run-time environment of the software MADNESS. MADNESS is an adaptive pseudo-spectral approach to solving differential and integro-differential equations based on multiresolution analysis and low-separation-rank approximation of functions and operators in 1-6 dimensions. The mathematical construction and the software environment permit independent adaptive representations for different functions and operators to achieve high-accuracy simulations, with guaranteed precision, motivated by electronic structure and density functional theory calculations. The parallel run-time is a scalable task-based multithreaded programming model based on “Futures,” for hiding latency and automating data dependency management. Global names and name-spaces with one-sided messaging permit easier construction of dynamic load balancing, data redistribution and work stealing. MADNESS simulations have scaled beyond 64K cores. MADNESS received an R&D 100 Award in 2011.
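
    For reference, the low-separation-rank approximation mentioned above can be sketched (an illustrative gloss) as

        f(x_1,\dots,x_d) \;\approx\; \sum_{l=1}^{r} s_l \prod_{i=1}^{d} f_l^{(i)}(x_i),

    where r is the separation rank; each one-dimensional factor is stored in an adaptively refined multiwavelet basis, and operators are applied in an analogous separated form with controlled error.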

 

W10 - Stochastic Modeling and Quantification of Uncertainties - Rubens Sampaio – PUC-Rio - Rio de Janeiro – Brazil

Abstract - Recent developments in computational and sensing resources provide us with the ability to make inferences about physical phenomena at increasingly detailed resolutions and to better characterize the interplay between experimentally observed cause and effect. In many problems of interest, this interplay is best described in a non-deterministic framework, permitting the description of experimental errors and inaccuracies, modeling errors and inadequacies, as well as numerical approximations. These uncertainties conspire, together with the interpretation and analysis tools, to affect the predictive power of accumulated knowledge.

This workshop will bring together current research efforts attempting to characterize and manage uncertainties in various stages of the prediction process. In particular, research in the following areas will be highlighted:

  • experimental data representation

  • data assimilation and inverse analysis

  • uncertainty propagation

  • non-deterministic computational modeling

  • optimization and design under uncertainty

Speakers

  1. Fernando Rochinha – Federal University of Rio de Janeiro – Brazil - Uncertainty Quantification in Flow-Structures Interaction.

    Abstract: The increasing complexity of engineering systems has frequently been tackled with sophisticated computational models. From the decision maker’s standpoint, this requires robust and reliable numerical simulators. Often, the reliability of those simulations is disrupted by the inexorable presence of uncertainty in the model data, such as inexact knowledge of system forcing, initial and boundary conditions, physical properties of the medium, and parameters in constitutive equations. These situations underscore the need for efficient uncertainty quantification (UQ) methods for establishing confidence intervals in computed predictions, assessing the suitability of model formulations, and supporting decision-making analysis.

    The traditional statistical tool for uncertainty quantification in engineering is Monte Carlo simulation. This method first generates an ensemble of random realizations of the uncertain data, then employs deterministic solvers repetitively to obtain the ensemble of results, which is processed to estimate the mean and standard deviation of the final results. The implementation of Monte Carlo is straightforward, but its convergence rate is very slow (proportional to the inverse of the square root of the number of realizations) and often infeasible due to the large CPU time needed to run the model in question. Another technique that has been applied recently is the so-called Stochastic Galerkin (SG) method, which employs Polynomial Chaos expansions to represent the solution and the inputs of stochastic differential equations. A Galerkin projection minimizes the error of the truncated expansion, and the resulting set of coupled equations is solved to obtain the expansion coefficients. SG methods are highly suited to ordinary and partial differential equations, even in the case of nonlinear dependence on the random data. The main drawback of SG lies in the need to solve a system of coupled equations, which requires efficient and robust solvers and, most importantly, the modification of existing deterministic codes; this last issue makes it difficult to use commercial or already deployed codes. A non-intrusive method, referred to as stochastic collocation (SC), addresses this point. SC methods combine interpolation methods with deterministic solvers, much like Monte Carlo: a deterministic problem is solved at each point of an abstract random space. Like SG methods, SC methods achieve fast convergence when the solution possesses sufficient smoothness in random space.

    Here, particular emphasis is placed on investigating uncertainty propagation in the nonlinear response of flow-structures interactions. Waves and currents, major agents in the dynamics of floating structures, are usually modeled as random processes, so stochastic modeling offers an appropriate framework for the external forces and for uncertainties in the data, such as damping and boundary conditions. The flow-structure interaction is modeled in a simple way, focusing on the assessment of an SC method as an effective tool for uncertainty quantification. The interaction is introduced by means of Morison’s formula which, despite the simplicity of the model itself, represents a challenge since the input is a nonlinear function of the random variables; those variables represent the phase angles inherent to the time-series description of the wave-induced motion.
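
    To make the convergence remark above concrete, the following minimal Monte Carlo sketch (an editorial toy model, not the speaker's code; the response function is invented) exhibits the C/sqrt(N) error decay:

        import numpy as np

        rng = np.random.default_rng(0)

        def toy_response(xi):
            # Stand-in for a deterministic solver: a nonlinear response
            # of a random input xi (e.g. an uncertain damping coefficient).
            return np.sin(xi) ** 2 + 0.1 * xi

        # High-accuracy reference value for the mean response.
        reference = toy_response(rng.normal(size=2_000_000)).mean()

        for n in (100, 1_000, 10_000, 100_000):
            estimate = toy_response(rng.normal(size=n)).mean()
            print(f"N={n:>6}: |error| = {abs(estimate - reference):.2e}"
                  "  (~ C/sqrt(N))")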

  2. Sergio Bellizzi – CNRS and Vice President, LMA – Marseille – France - Orthogonal decomposition methods for dynamical analysis of random mechanical systems.

    Abstract: Orthogonal decompositions provide a powerful tool for random vibration analysis. For dynamical structures, modal analysis is used extensively, but its validity is limited to linear structures, and new developments have been proposed in order to examine nonlinear systems. The most popular orthogonal decomposition is the Karhunen-Loève Decomposition (KLD), also named Proper Orthogonal Decomposition (POD). The KLD is a statistical technique for finding the coherent structures in an ensemble of spatially distributed data. The structures (or KL modes) are defined as the eigenvectors of the covariance matrix of the associated random field. The KL modes are orthogonal, and the KL mode associated with the greatest eigenvalue is the optimal vector to characterize the random response. Considering the spectral density matrix function in place of the covariance matrix, a Spectral Proper Transformation (SPT) can be obtained from the eigenvectors of the spectral density matrix. These eigenvectors, named SP modes, are frequency dependent and in general complex and orthogonal. Recently, a modified KLD named Smooth Decomposition (SD) has been proposed. The SD can be viewed as a projection of an ensemble of spatially distributed data such that the vector directions of the projection not only keep the maximum possible variance but also yield motions along those directions that are as smooth as possible in time. The vector directions (or smooth modes) are defined as the eigenvectors of the generalized eigenproblem built from the covariance matrix of the random field and the covariance matrix of the associated time-derivative random field. The smooth modes are orthogonal with respect to both covariance matrices. In this presentation, the definition and properties of the KLD, SPT and SD methods will be presented and compared. It will be shown that these methods are interesting tools to analyze linear as well as nonlinear random mechanical systems. The modes extracted from the decomposition methods may serve two purposes: feature extraction, revealing relevant but unexpected structure hidden in the data, and order reduction, projecting high-dimensional data onto a lower-dimensional space. We first focus on the physical interpretation of the modes associated with the three approaches, considering both discrete and continuous mechanical systems. Next, the ability of reduced-order models based on these families of modes to approximate the response of mechanical systems will be discussed for both linear and nonlinear systems.
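
    The two eigenproblems described above can be sketched numerically as follows (an editorial illustration on synthetic data; the field and its covariances are invented):

        import numpy as np
        from scipy.linalg import eigh

        # Synthetic ensemble: a coherent sinusoidal field plus noise,
        # sampled at 3 spatial points over time.
        rng = np.random.default_rng(1)
        t = np.linspace(0.0, 10.0, 2001)
        u = np.sin(t)[:, None] * rng.normal(size=3) \
            + 0.1 * rng.normal(size=(t.size, 3))
        du = np.gradient(u, t, axis=0)      # time-derivative field

        C = np.cov(u, rowvar=False)         # covariance of the field
        Cdot = np.cov(du, rowvar=False)     # covariance of its derivative

        # KL / POD modes: eigenvectors of C, by decreasing eigenvalue.
        kl_vals, kl_modes = eigh(C)
        kl_vals, kl_modes = kl_vals[::-1], kl_modes[:, ::-1]

        # Smooth Decomposition: generalized eigenproblem C v = mu Cdot v.
        sd_vals, sd_modes = eigh(C, Cdot)

        print("KL eigenvalues:", kl_vals)
        print("SD eigenvalues:", sd_vals)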

  3. Olivier Le Maitre - CNRS – France - Generalized Spectral Decompositions for Computational Fluid Dynamics Models Involving Random Data

    Abstract: Many fluid flow models involve uncertain data (such as boundary conditions, forcing terms, model parameters) that can be modeled as random quantities. As a result, the model solution becomes random, and effective strategies have been proposed for its resolution, in particular the Stochastic Finite Element Method (SFEM, Ghanem and Spanos, 1991). Depending on the complexity of the random data model, the SFEM can be very costly both in terms of computational time and memory requirements. We propose an alternative approach, called the Generalized Spectral Decomposition (GSD), which is based on an approximation of the solution as a series of terms, each consisting of a product of a deterministic function (in the physical variables: space, time) with a random functional (in the stochastic variables). For this separated expansion, the method can be understood as a reduced basis approximation where, contrary to the classical SFEM, the stochastic functionals (i.e. the stochastic basis) are not selected a priori, but are determined together with their deterministic counterparts in order to minimize the residual of the stochastic equations. Different algorithms are presented for the computation of the GSD separated expansion.

    GSD is applied to the incompressible Navier-Stokes equations to demonstrate its effectiveness on non-linear problems and investigate its robustness. We also discuss in detail the velocity-pressure coupling and propose and contrast different algorithms for the approximation of the stochastic pressure field.
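
    In the notation of this gloss (not the speaker's), the GSD separated expansion reads

        u(x, t, \xi) \;\approx\; \sum_{i=1}^{m} \lambda_i(\xi)\, U_i(x, t),

    with deterministic modes U_i and stochastic coefficients \lambda_i determined together by minimizing the residual of the stochastic equations.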

  4. Rubens Sampaio – PUC – Rio de Janeiro – Brazil - Drill-string nonlinear dynamics: deterministic and stochastic models.

    Abstract: The presentation analyzes the nonlinear dynamics of a drill-string, including uncertainty modeling. A drill-string is a slender flexible structure that rotates and digs into the rock in search of oil. A mathematical-mechanical model is developed for this structure including fluid-structure interaction, impact, geometrical nonlinearities and bit-rock interaction. After the derivation of the equations of motion, the system is discretized by means of the Finite Element Method and a computer code is developed for the numerical computations using MATLAB. The normal modes of the system in the prestressed configuration are used to construct a reduced-order model for the system, as sketched below. To take uncertainties into account, the nonparametric probabilistic approach, which is able to handle both parameter and model uncertainties, is used. The probability density functions related to the random variables are constructed using the Maximum Entropy Principle, and the stochastic response of the system is calculated using the Monte Carlo Method. A novel approach to take into account model uncertainties in a nonlinear constitutive equation (the bit-rock interaction model) is developed using the nonparametric probabilistic approach. To identify the dispersion parameter of the bit-rock interaction model, a methodology is proposed applying the Maximum Likelihood Method together with a statistical reduction in the time domain (using the Karhunen-Loève decomposition). Finally, a robust optimization problem is solved to find the operational parameters of the system that maximize its performance while respecting the integrity limits of the system, such as fatigue and instability.
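
    The modal-reduction step mentioned above can be sketched as follows (an editorial illustration with toy matrices standing in for the drill-string FEM model):

        import numpy as np
        from scipy.linalg import eigh

        # Toy FEM matrices for an n-dof structure (stand-ins for the
        # prestressed drill-string model).
        n = 50
        K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # stiffness
        M = np.eye(n) / n                                       # mass

        # Normal modes of the configuration: K phi = w^2 M phi.
        w2, phi = eigh(K, M)
        Phi = phi[:, :6]                    # keep the first 6 modes

        # Reduced-order matrices used inside the Monte Carlo loop.
        K_red = Phi.T @ K @ Phi
        M_red = Phi.T @ M @ Phi
        print("reduced sizes:", K_red.shape, M_red.shape)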

 

W11 - Workshop In Math in Industry – José Alberto Cuminato –ICMC- USP - Brazil – Part I

Abstract – This workshop will discuss problems arising from industry that became important for the development of the Math in Industry discipline. All the presenters are leading scientists with extensive experience in developing partnerships with industry. The aim of this mini-symposium is to show how math in industry developed in several countries and how the different groups approached industry.

Speakers

  1. Kees Vuik - Delft Institute of Applied Mathematics - Delft - The Netherlands. Fast solvers for seismic problems.

    Abstract: We consider an iterative solver for the discretization of the Helmholtz equation. Ingredients of our work are the shifted Laplace preconditioner and deflation. The development of the shifted Laplace preconditioner was a breakthrough in the development of efficient solution techniques for the Helmholtz equation. The distinct feature of this preconditioner is the introduction of a complex shift, effectively introducing damping of wave propagation in the approximate solve. This preconditioner has been extensively discussed in various texts and applied in a number of different contexts. Although the resulting algorithm is efficient and robust, it is not truly scalable: the bigger the wave number, the closer the smallest eigenvalues are to zero, hampering the convergence. The idea of projection has long been used to deflate unfavorable eigenvalues: by removing the components of the eigenvectors corresponding to unwanted eigenvalues, better convergence for CG and GMRES has been reported in various texts. Here, deflation combined with the shifted Laplace preconditioner is proposed, which leads to a ’nearly’ scalable Helmholtz solver, in the sense that the number of iterations does not depend on the problem parameters. We provide a convergence analysis for a simplified two-grid method. We perform a Fourier two-grid analysis of a one-dimensional model problem with Dirichlet boundary conditions discretized by a second-order accurate finite difference scheme. The components analyzed are the shifted Laplace preconditioner used as smoother, full-weighting and linear interpolation inter-grid transfer operators, and a Galerkin coarsening scheme. This Fourier analysis results in a closed-form expression for the eigenvalues of the two-grid operator. These expressions show that the spectrum is favorable for convergence of Krylov subspace methods. We also apply the deflated shifted Laplace preconditioner to two-dimensional model problems with constant and non-constant wave numbers and Sommerfeld boundary conditions, discretized by a second-order accurate finite difference scheme on uniform meshes. Numerical results show that the number of GMRES iterations is ’nearly’ wave number independent. Finally, in order to speed up the shifted Laplace preconditioned Krylov method, we implement the algorithm on various GPUs. It appears that all the ingredients are suitable for a GPU processor, where the multigrid method used to approximate the inverse of the shifted Laplace preconditioner is the most challenging part.
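
    In standard notation (an editorial gloss; the sign of the imaginary shift varies across papers), the Helmholtz problem and the shifted Laplace preconditioner read

        -\Delta u - k^2 u = f, \qquad M_{(\beta_1,\beta_2)} = -\Delta - (\beta_1 - \mathrm{i}\,\beta_2)\,k^2,

    with a typical shift (\beta_1, \beta_2) = (1, 0.5); M is inverted approximately by a multigrid cycle and the preconditioned system is solved with GMRES.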

  2. John Ockendon - Oxford University – Oxford – UK - The Pantograph Equation: a Paradigm for Maths-in-Industry.

    Abstract: This talk will review the impact that a problem posed by the railway industry in 1969 has had on the theory of delay-differential equations and the implications for the math-in-industry community.
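
    For reference (not part of the abstract), the pantograph equation in its simplest form is the proportional-delay equation

        y'(t) = a\,y(t) + b\,y(\lambda t), \qquad y(0) = y_0, \quad 0 < \lambda < 1,

    whose stretched argument \lambda t, originally arising from the motion of a pantograph head along an overhead wire, makes the equation non-local in time.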

  3. Jorge Amaya - Centro de Modelamiento Matemático - Universidad de Chile – Chile – Mine Planning: Finding Optimal Sequences of Extraction by Using Mathematical Models and High Performance Computing (HPC).

    Abstract: Strategic planning plays a very relevant role in the mining business because the general long-term extraction strategy determines the subsequent steps of the production process. In particular, the robustness of the proposed sequences can be incorporated into a fine evaluation of the business (quality of data, price variations, and stochastic behavior of the market). Finding the optimal block sequence for the extraction of ore from the mine site is a very complex mathematical problem, with a huge number of decision variables and prohibitive computer execution times. Our aim is to establish good models and to construct computer tools for the generation of robust optimal sequences, in terms of discounted economic value. In this talk, we introduce, for the open-pit case, the mathematical formulation of the strategic mine planning problem; we also present some algorithmic proposals to solve it and, finally, we show a practical software tool using HPC.
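
    A schematic form of the underlying block-sequencing model (an editorial sketch; practical formulations add many refinements) is the integer program

        \max \sum_{t=1}^{T}\sum_{b} \frac{v_b}{(1+d)^t}\, x_{b,t}
        \quad\text{s.t.}\quad \sum_{t} x_{b,t} \le 1, \qquad
        x_{b,t} \le \sum_{s \le t} x_{b',s} \ \ (b' \prec b), \qquad
        \sum_{b} w_b\, x_{b,t} \le C_t, \qquad x_{b,t} \in \{0,1\},

    where x_{b,t} = 1 if block b is extracted in period t, v_b is the block value, d the discount rate, b' \prec b the precedence relation (blocks that must be removed before b), and C_t the mining capacity in period t.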

  4. Graeme Wake - Massey University – Auckland – New Zealand - Modeling of cancer treatment.

    Abstract: Improved treatment of cancer is one of the most important challenges for medical science. Tailoring treatment for individual patients has long been an objective for oncologists. While many biological techniques and mathematical models have been devised to predict the course of treatment, none have been applied routinely in clinical oncology. Our model, which describes the complexities of the responses of tumor cells over time to both anticancer drugs and radiation, has considerable impact on our ability to advance the individualization of cancer therapy. This process is in advanced stages of implementation. Over the last few years, we have developed sophisticated mathematical equations describing the behavior of cancer cells as they progress through the cell division cycle. The stage of the cycle the cells are actually in can be distinguished by their DNA content, and this enables model outcomes to be compared directly with experimental results. These equations describe the response of human tumors to chemotherapy and radiotherapy. Firstly, we propose a model of the cell cycle which gives rise to challenges in the non-local calculus involved. We then incorporate programmed cell death (apoptosis) into the model. Then we introduce perturbations of the model parameters by treatment and compare model results with data. This research will provide significant new analytical and computational insights into the area of non-local equations, where cause and effect are separated in space and time, as well as underpinning support to oncologists concerned with treatment, drug companies producing drugs, and the management of clinics.
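
    A schematic form of such a cell-growth model (an editorial sketch in the spirit of the speaker's published work; the talk's equations include treatment terms and more detail) is

        \frac{\partial n}{\partial t}(x,t) + \frac{\partial}{\partial x}\bigl(g\,n(x,t)\bigr) = 4b\,n(2x,t) - (b+\mu)\,n(x,t),

    where n(x,t) is the density of cells with DNA content x, g the growth rate, b the division rate and \mu the death rate; the non-local argument 2x (a dividing cell of content 2x produces two daughters of content x) is what separates cause and effect.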

 

W12 - Workshop In Math in Industry - Yuan Jin Yun – UFPR and Haroldo Fraga Filho - INPE – Brazil – Part II

Speakers

  1. Fredrik Edelvik - Fraunhofer-Chalmers Centre - Gothenburg – Sweden - Modeling and Simulation of Coating Processes in Automotive Industry.

    Abstract: The main processes in automotive paint shops are electro-coating, sealing and cavity wax, spray painting and oven curing. The complexity of these processes, characterized by multi-phase and free-surface flows, multi-physics, multi-scale phenomena, and large moving geometries, poses great challenges for mathematical modeling and simulation. The current situation in the automotive industry is therefore to rely on individual experience and physical validation for improving the paint and surface treatment processes. Having access to tools that combine the flexibility of robotic path planning with fast and efficient simulation of the processes would be advantageous, since such tools can contribute to reducing the time required for the introduction of new models, reducing the environmental impact and increasing quality. In this talk we will present activities in a joint project with the Swedish automotive industry aimed at developing novel, mathematically based simulation software for paint and surface treatment processes. Particular emphasis will be given to the Navier-Stokes solver, which is based on a finite volume discretization on a Cartesian octree grid that can be dynamically refined and coarsened. Unique immersed boundary methods are used to model the presence of objects in the fluid. This enables modeling of moving objects at virtually no additional computational cost, and greatly simplifies preprocessing by avoiding the cumbersome generation of a body-conforming mesh. Simulation results from coating processes as well as other industrial applications will be presented during the talk.

  2. Amiya Kumar Pani - Indian Institute of Technology – Mumbai – India - Adaptive Finite Element Method for Valuation of Multi-Asset American Options.

    Abstract: In this talk, we discuss a posteriori error estimates in the maximum norm for Galerkin finite element approximations to the valuation of a multi-asset American option. The mathematical model for pricing an American option gives rise to a parabolic variational inequality with a non-smooth payoff (or terminal condition) on an unbounded domain. We first truncate the domain and then apply penalization techniques to reformulate the variational inequality as a semilinear parabolic problem on a bounded domain. We estimate the penalization error in the maximum norm for a smooth obstacle. Instead of extending the results of the elliptic setting, we use elliptic reconstruction to bring elliptic a posteriori estimates to bear on the parabolic problem. We first derive an a posteriori error estimator in the maximum norm for the fully discrete backward Euler finite element approximation. Finally, we discuss implementation aspects for the multi-asset American option.
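
    One common penalization (a schematic form; details differ in the talk) replaces the variational inequality by the semilinear problem

        \partial_\tau u^\varepsilon - \mathcal{L}_{BS}\, u^\varepsilon = \frac{1}{\varepsilon}\,\bigl(\psi - u^\varepsilon\bigr)^{+}, \qquad u^\varepsilon(\cdot, 0) = \psi,

    where \tau = T - t is the time to expiry, \mathcal{L}_{BS} the multi-asset Black-Scholes operator and \psi the payoff; as \varepsilon \to 0 the penalty forces u^\varepsilon \ge \psi and the American constraint is recovered.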

  3. Javier Jetcheverry - Tenaris – Buenos Aires – Argentina - Mathematical modeling for improved non-destructive testing of steel pipes.

    Abstract: Mathematical modeling constitutes a very valuable tool in the effort for ever-improved non-destructive testing techniques. In particular, quality requirements for steel pipes for the oil industry increase continuously, because of more stringent in-service demands and environmental concerns. We will present several examples where mathematical models help in understanding and improving the fundamental inspection techniques (magnetic flux leakage, eddy currents, ultrasound) and discuss some of the conditions for success.

  4. Asla Medeiros e Sá - Fundação Getulio Vargas – Rio de Janeiro, Brazil - Very Important Faces: yet another character annotation tool.

    Abstract: This presentation will describe the ongoing project of creating yet another character annotation tool, the Very Important Faces (V.I.F.) tool. Although the idea of character annotation is really not a new subject, off-the-shelf software annotation tools have proved to be designed for contexts whose assumptions differ from those of historic photographic catalogs; thus, in practice, the adoption of such tools has fallen short of expectations. The most evident limitation of the majority of the available photo annotation tools is that they do not process the information present in captions and texts produced by experts to describe the contents of the photographic collections. Our dataset consists of a contemporary historic character photographic collection, with informative captions, available for public access. The design proposal of the V.I.F. tool is to help the experts responsible for collection organization migrate the information documented in the texts associated with the images to W3C metadata standards. The V.I.F. tool implements face detection algorithms. It also detects proper names in previously inserted captions to help the user (an expert) match names and faces, in order to produce photo annotations compatible with semantic web principles.
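
    The face-detection and name-extraction pairing described above can be sketched as follows (an editorial illustration using OpenCV's classical Haar cascade, which the abstract does not name; the caption and image are invented):

        import cv2
        import numpy as np

        # Hypothetical caption from the catalog; the image here is synthetic
        # (a real run would load a photograph, e.g. cv2.imread("photo.jpg")).
        caption = "President X greets Ambassador Y at the 1952 reception."
        image = np.full((480, 640, 3), 128, dtype=np.uint8)

        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        detector = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        faces = detector.detectMultiScale(gray, scaleFactor=1.1,
                                          minNeighbors=5)

        # Naive name extraction: capitalized tokens after the first word
        # (a real tool would use a proper-name recognizer on the caption).
        names = [w.strip(".,") for w in caption.split()[1:] if w[0].isupper()]
        print(f"{len(faces)} face(s) detected; candidate names: {names}")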

 

W13 - Image Processing and Reconstruction: Models and Methods – Alvaro De Pierro (University of Campinas) and Elias S. Helou Neto (University of São Paulo) - Brazil

Abstract - From medical applications in diagnosis, through nondestructive testing, remote sensing, ultramicroscopy, and many others, image reconstruction and processing give rise to mathematical models and methods associated with some of the most important modern technologies. What these technologies have in common is that they are modeled as inverse and ill-posed problems: small perturbations in the data generate relatively large perturbations in the solutions, introducing the necessity of different types of regularization. Mathematically speaking, these problems are related to different areas, including optimization, computational harmonic analysis, approximation theory and integral equations. The goal of this workshop is to present a sample of topics in inverse problems in imaging, by important experts in these fields from some of the best universities in Europe and the United States, as well as from industry.

The following topics will be addressed: a brief introduction to probabilistic and statistical methods for inverse problems and image processing; recent developments in a relatively new imaging modality, phase contrast tomography, where the parameter to be retrieved is the variation of the phase instead of the attenuation retrieved in standard X-ray tomography; the challenges of low-dose tomography and X-ray fluorescence computed tomography; and recent advances in fluorescence microscopy breaking the diffraction limit.

Speakers

  1. Ali Mohammad-Djafari, Directeur de recherche (Senior Researcher), Laboratoire des Signaux et Systèmes, Université Paris-Sud, France - Sparsity enforcing prior models and Bayesian approach for signal and image reconstruction.

    Abstract: In this talk, we propose different prior models for signals and images which can be used in a Bayesian inference approach in many inverse problems in signal and image processing. The sparsity may be enforced directly in the original space or in a transformed space. Here we consider it directly in the original space (impulsive signals). Among the possible models, we focus on those which are either heavy-tailed (Generalized Gaussian, Student-t or Cauchy) or mixture models (Mixture of Gaussians, Bernoulli-Gaussian, Bernoulli-Gamma, ...). Depending on the prior model selected, the Bayesian computations (optimization for the Joint Maximum A Posteriori (MAP) estimate, or MCMC or Variational Bayes Approximations (VBA) for Posterior Means (PM) or complete density estimation) may become more complex. We propose these models, discuss different possible Bayesian estimators, describe the corresponding appropriate algorithms, and discuss their relative complexities and performances. We show the results of the proposed methods on applications such as signal deconvolution, image restoration and computed tomography.
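
    A minimal MAP-deconvolution sketch with a Student-t-type sparsity prior (an editorial toy example; the talk's models and algorithms are more general, and all parameters below are invented):

        import numpy as np

        rng = np.random.default_rng(2)

        # Toy 1-D deconvolution: y = h * x + noise, with sparse impulsive x.
        n = 200
        x_true = np.zeros(n)
        x_true[rng.choice(n, 5, replace=False)] = 3.0
        h = np.exp(-0.5 * (np.arange(-5, 6) / 2.0) ** 2)   # blur kernel
        blur = lambda v: np.convolve(v, h, mode="same")
        y = blur(x_true) + 0.05 * rng.normal(size=n)

        # MAP estimate: minimize ||y - Hx||^2 / 2 + lam*sum(log(1 + x^2/nu)),
        # the log term being the negative log of a Student-t-type prior.
        lam, nu, step = 0.5, 0.1, 0.02
        x = np.zeros(n)
        for _ in range(4000):
            grad = blur(blur(x) - y)                   # data term (h symmetric)
            grad += lam * (2.0 * x / nu) / (1.0 + x * x / nu)   # prior term
            x -= step * grad

        print("true support:    ", np.flatnonzero(x_true))
        print("recovered (>1.0):", np.flatnonzero(np.abs(x) > 1.0))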

  2. Patrick La Riviere - Dept. of Radiology, University of Chicago Medical Center, USA - Virtual x-ray histology using multiple metal stains and multi-energy synchrotron microCT.

    Abstract: We are engaged in a research program to develop tools for high-resolution, high-throughput specimen imaging in order to perform ex-vivo phenotyping of model organisms such as zebrafish. Previously we have used single-energy synchrotron computed tomography microscopy (microCT) to image zebrafish larvae and embryos stained with one or more heavy-metal stains (osmium tetroxide, uranyl acetate). Our goal in this work is to use a multi-energy strategy to differentiate multiple stains in the same fish. The ability to distinguish multiple biologically targeted stains by means of microCT would provide the foundations for three-dimensional x-ray histology, in which high-resolution, “color” images of intact specimens could be obtained. In this presentation we will discuss the mathematical issues related to optimizing stain choice, energy selection, and methods of solving the inverse problem, especially when using polychromatic radiation. Specifically, we make use of sinogram-domain penalized-likelihood methods for estimating line integrals through basis materials and Cramér-Rao lower bound methods to optimize stain and energy selection.
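
    In the usual notation (an editorial gloss), the basis-material model behind the estimation problem above is

        \mu(E, \mathbf{x}) = \sum_{m} a_m(\mathbf{x})\, f_m(E), \qquad
        I_j = \int S_j(E)\, \exp\Bigl(-\sum_{m} f_m(E) \int_L a_m\, \mathrm{d}s\Bigr)\, \mathrm{d}E,

    so that recovering the line integrals \int_L a_m\, \mathrm{d}s of each basis material (stain) from measurements I_j at several spectra S_j is the sinogram-domain estimation problem, after which each a_m is reconstructed tomographically.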

  3. Russell Luke - Institut für Numerische und Angewandte Mathematik, Universität Göttingen – Germany - Imaging from low-count X-ray diffraction data: variational analysis and algorithms.

    Abstract: We develop the mathematical theory and algorithms for determining the phase of X-rays from low-count and missing diffraction data. This is an instance of the well-known phase retrieval problem and leads to nonconvex inverse problems. The state-of-the-art methods, known as iterated projection algorithms (e.g. Fienup's Hybrid Input-Output algorithm or Elser's Difference Map method), are known to be unstable and do not account for ill-posedness. This is a severe limitation for low-count data and a barrier to further computational imaging capabilities based on phase retrieval, such as phase contrast tomography. We discuss the theory of regularity for this problem from a variational perspective and present results of the application of these ideas to operator splitting/projection-based algorithms.
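
    A minimal error-reduction sketch, a simpler relative of the HIO and Difference Map algorithms named above (an editorial illustration ignoring the noise and regularization issues the talk addresses):

        import numpy as np

        rng = np.random.default_rng(3)

        # Toy 1-D object with known support; data are Fourier magnitudes only.
        n = 64
        support = np.zeros(n, dtype=bool)
        support[20:40] = True
        x_true = np.where(support, rng.random(n), 0.0)
        mag = np.abs(np.fft.fft(x_true))             # measured magnitudes

        # Error reduction: alternate the Fourier-magnitude projection and
        # the support/nonnegativity projection.
        x = rng.random(n)
        for _ in range(500):
            X = np.fft.fft(x)
            X = mag * np.exp(1j * np.angle(X))       # impose magnitudes
            x = np.fft.ifft(X).real
            x[~support] = 0.0                        # impose support
            np.clip(x, 0.0, None, out=x)             # impose nonnegativity

        misfit = np.linalg.norm(np.abs(np.fft.fft(x)) - mag) \
                 / np.linalg.norm(mag)
        print(f"relative magnitude misfit after 500 sweeps: {misfit:.3e}")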

  4. Emil Sidky - Dept. of Radiology, University of Chicago Medical Center, USA - The role of compressive sensing in iterative image reconstruction for computed tomography.

    Abstract: Due to pressure to reduce dose in computed tomography (CT) exams and gains in computational technology, iterative image reconstruction is actively being pursued by the major CT manufacturers. The idea is that iterative methods allow for better data modeling and accounting for data noise properties, potentially allowing for a reduction in the X-ray intensity without loss of image quality. The new field of compressive sensing (CS) may allow for a reduction of scanning exposure by reducing the number of views necessary for image reconstruction. In this talk, the basic ideas of CS will be reviewed and their application to the CT system discussed. The place of CS methods within iterative image reconstruction will also be explained. As the idea of reduced sampling is central to CS, it becomes important to understand what is meant by full sampling. With an understanding of CT sampling conditions, it then becomes possible to measure the potential gains in data reduction with CS methods.
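
    A prototypical CS formulation in CT (an editorial gloss; many variants exist) is constrained total-variation minimization,

        \min_{x \ge 0}\ \|x\|_{\mathrm{TV}} \quad \text{s.t.} \quad \|Ax - b\|_2 \le \varepsilon,

    where A is the (sparse-view) projection matrix, b the measured sinogram, and the total variation acts as the sparsifying transform.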

 

W14- Geometry and Simulation on Micro flows and Bioflows - Gustavo Buscaglia – ICMC-USP – Brazil

Abstract – The topic of this workshop is the simulation of fluid mechanics phenomena in which the dynamics is mainly governed by the geometrical configuration, which is itself an unknown of the problem. This is the case for flows with moving interfaces, with those at microscopic scales of special interest. Very simple interfaces, such as that of a clean bubble, are already challenging to simulate. Much effort is devoted nowadays to more complex interfaces, ranging from thermally- or electrically-driven micro-drops to the direct simulation of the mechanics of the cell. The aim of the workshop is to discuss the many challenges involved in the mathematical modeling and numerical approximation of these problems.

Speakers

  1. Pedro Morin, IMAL, Univ. Nac. del Litoral, Santa Fe, Argentina - An adaptive FEM for shape optimization.

    Abstract: We examine shape optimization problems in the context of inexact sequential quadratic programming. Inexactness is a consequence of using adaptive finite element methods (AFEM) to update the boundary, compute the geometric functional, and approximate the state and adjoint equations (via the dual weighted residual method). We present a novel algorithm that equidistributes the errors due to shape optimization and discretization, thereby leading to coarse resolution in the early stages and fine resolution upon convergence, thus optimizing the computational effort. We discuss the ability of the algorithm to detect whether geometric singularities such as corners are genuine to the problem or simply due to lack of resolution.

  2. Miguel Sebastian Pauletti, Texas A&M University, USA - A parametric finite element method for geometric problems: Techniques and applications.

    Abstract: A parametric FEM for free boundary problems is discussed. Applications include geometric problems and fluid-membrane interaction in biomembranes. With slight modifications the method can be successfully applied to shape optimization problems such as the design of an obstacle with minimal drag or of a bypass. Adaptivity becomes tricky; the issue (geometric consistency, GC) and a solution are discussed. GC can then be exploited in a novel algorithm for surface restoration.

  3. Gustavo Buscaglia, ICMC-USP – Brazil - Finite elements for the dynamic simulation of viscous membranes.

    Abstract: A finite element method is proposed for the simulation of lipidic membranes. These membranes, which play a key role in living cells, essentially behave as incompressible, viscous two-dimensional fluids flowing on a curved, time-evolving surface. The tangential Stokes operator is approximated using equal-order, linear finite elements, on a tessellation of the surface. This allows for the simulation of relaxation processes of lipidic membranes with general three-dimensional shapes.

  4. Marcio S. Carvalho, PUC-Rio, Brazil - Steady state, transient response analysis and optimization of coating processes.

    Abstract: The deposition of a thin and uniform liquid layer onto a moving substrate, generally referred to as a coating process, is an important step in the manufacturing of many different products, such as the optical films used in electronic displays, solar panels and flexible electronics. The presence of highly curved free surfaces and the small scales of the flows make theoretical and experimental analyses extremely challenging. Because of the strict uniformity specifications of new products coming to the market, theoretical analysis of coating flows requires not only predictions of the two-dimensional steady-state flows, but also the sensitivity of those flows to small episodic upsets and to ongoing periodic perturbations that cannot be avoided in manufacturing plants. The complexity of the mathematical formulation and the high computational cost of these advanced analyses have discouraged their use as a general engineering tool for process design and optimization. We show how advanced analysis of coating flows can be used to optimize the process, leading to more uniform products, and also discuss new, more efficient numerical methods that drastically reduce the computational cost of such analyses.


General Talks

 

Beresford Parlett - University of California Berkeley, USA - Title: Finding eigenvalue multiplicities with ARPACK.

Abstract: ARPACK is an excellent package for computing eigenvalues and eigenvectors by Krylov subspace methods using implicit restarts to keep the dimension small.  We discuss how it can sometimes miss wanted eigenvalues or return an incorrect multiplicity.  We also show how to remedy these defects.
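
SciPy's eigsh wraps ARPACK, so the failure mode discussed above is easy to reproduce (a minimal sketch; the matrix and parameters are invented for illustration):

    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import eigsh

    # A symmetric matrix with a triple eigenvalue at 2.0 among simple ones.
    vals = np.r_[2.0, 2.0, 2.0, np.linspace(3.0, 10.0, 97)]
    A = diags(vals)

    # ARPACK (wrapped by eigsh) builds a Krylov space from a single start
    # vector, so copies of a multiple eigenvalue can be missed when the
    # subspace is kept small.
    w, _ = eigsh(A, k=4, which="SA")
    print("k=4:", np.sort(w))

    # A pragmatic check: enlarge k (or restart with a new start vector)
    # and count how many Ritz values cluster at the same point.
    w, _ = eigsh(A, k=8, which="SA")
    print("k=8:", np.sort(w))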

 

Sebastián Ceria - AXIOMA – New York – USA - Title: Equity Risk Management and Optimization - A challenging relationship.

Abstract: The construction of optimized portfolios in asset management entails the complex interaction between three key entities: the risk factors, the alpha factors and the constraints. The problems that arise due to mutual misalignment between these three entities are collectively referred to as Factor Alignment Problems (FAP). Examples of FAP include risk-underestimation of optimized portfolios, undesirable exposures to factors with hidden and unaccounted systematic risk, consistent failure in achieving ex-ante performance targets, and inability to harvest high quality alphas into above-average IR. In this talk we discuss FAP and propose a solution approach which is based on augmenting the user risk model with a single additional factor, the Alpha Alignment Factor (AAF). We will show how the Alpha Alignment Factor provides a natural and effective remedy to FAP. The Alpha Alignment Factor not only corrects for the risk underestimation bias of optimal portfolios but also pushes the ex-post efficient frontier upwards thereby empowering portfolio managers to access portfolios that lie above the traditional risk-return efficient frontier.

Bio: Dr. Sebastián Ceria is the Chief Executive Officer of Axioma. Before founding Axioma, Ceria was an Associate Professor of Decision, Risk and Operations at Columbia Business School from 1993 to 1998. Ceria has worked extensively in the area of optimization and its application to portfolio management. He is the author of many articles in publications including Management Science, Mathematical Programming, Optima and Operations Research. Most recently, Ceria's work has focused on the area of robust optimization in portfolio management.  He has co-authored numerous papers on the topic, including, "Incorporating Estimation Errors into Portfolio Selection: Robust Portfolio Construction," which was published in The Journal of Asset Management. He is a recipient of the Career Award for Operations Research from the National Science Foundation. Ceria completed his PhD in Operations Research at Carnegie Mellon University's Graduate School of Industrial Administration.

 

Enrique Zuazua - Scientific Director - BCAM - Basque Center for Applied Mathematics, Bilbao, Spain - Title: Flow control in the presence of shocks.

Abstract: Flow control is one of the most challenging and relevant topics connecting the theory of Partial Differential Equations (PDE) and Control Theory. On the one hand, the number of possible applications is huge, including optimal shape design in aeronautics. On the other hand, from a purely mathematical point of view, it involves sophisticated models such as the Navier-Stokes and Euler equations and hyperbolic systems of conservation laws, which constitute, certainly, one of the main challenges of the theory of PDE. Indeed, some of the main issues concerning existence, uniqueness and regularity of solutions are still open in this field. Moreover, Control Theory also faces added difficulties when addressing these issues, since the possible presence of singularities in solutions often makes classical approaches fail.

In this lecture we present recent joint work with Carlos Castro and Francisco Palacios in which we propose a new alternating direction method that allows not only dealing with shocks but also taking advantage of their presence to make the optimization process converge much faster.

 

Alvaro Coutinho - COPPE/UFRJ - Brazil – Title: Experience in solving offshore engineering problems: engineering science meets real-life engineering.

Abstract: Missing.

 

Round Table on Mathematics in Industry

 

John Ockendon – Oxford University, Oxford – UK - Title: Study Groups as a vehicle for kick-starting Mathematics-in-Industry.

Abstract: This talk will describe how Study Groups work and how they have evolved around the world over the past 50 years.

 

Graeme Wake - Centre for Mathematics in Industry, Massey University Auckland, New Zealand - Title: Industrial Mathematics Initiatives in the Southwest Pacific Region.

Abstract: A review of industrial mathematics initiatives undertaken by members of our Centre for Mathematics in Industry, focusing in particular on the recent Mathematics-in-Industry Study Groups run in Australia and New Zealand. Successes and pitfalls will be covered. During 2003-6 I was Director of the Study Groups for ANZ. Action is now focused on running one-off, specific-industry, in-house workshops matching expertise from within and outside Massey to the client’s problem.

 

Kees Vuik - Director of the Delft Centre for Computational Science and Engineering Delft Institute of Applied Mathematics Delft, The Netherlands - Title: Mathematics in Industry.

Abstract: Sometimes a distinction is made between Pure and Applied Mathematics. Using the following definition: “Pure Mathematics is mathematical knowledge that is not applied yet”, it appears that they are the same. At the Delft University of Technology we have a long tradition of combining mathematics with applications and of collaboration with other disciplines and with industry. More than 50 years ago the master program ’Technische Wiskunde’, which can be translated as ’Industrial Mathematics’, was founded at our university by Prof. R. Timman. From the start the following courses have been included in the program: functional analysis, probability and stochastics, mathematical physics and numerical analysis. Most students also attend courses in computer science and engineering. Many of the master thesis projects are related to problems coming from industry, and more than half of the master students stay at a company during their master thesis research.

Many of the collaborations between mathematics and industry are initiated by student projects. Sometimes such a project answers all the questions and the collaboration stops, but in most cases such a project turns out to be the starting point of a collaboration over the years. Typical examples are problems coming from the oil industry (Shell), electronics companies (Philips, ASML), marine industries (MARIN, Damen Shipyards) and large research laboratories (Delft Hydraulics, National Aerospace Laboratory). Although a project may start as an applied mathematics problem, it appears that in a number of these projects more fundamental mathematics is needed and developed. In such a case a PhD project is defined, sometimes funded by industry or by the national science foundation (STW). Some of these research projects result in key publications in SIAM journals with many citations. In the talk a number of these examples are given.

Finally, 10 years ago the board of the Delft University of Technology founded the Delft Centre for Computational Science and Engineering, a collaboration of 5 different research departments ranging from Computational Physics to Computational Civil Engineering. It started with funds for a number of joint PhD projects. Now it continues as a local network that organizes workshops and Master and PhD courses (parallel scientific computing, GPU computing, OpenFOAM) and leads applications for funding of large research programs. These activities again lead to a closer collaboration with industries and national research laboratories. During this whole period professors of the Delft Institute of Applied Mathematics have been directors of this center.

 

Jorge Amaya - Center for Mathematical Modeling, Santiago – CHILE - Title: The Center for Mathematical Modeling: an international unit of the Centre National de la Recherche Scientifique (CNRS) and associate unit of PARIS VI.

Abstract: We introduce the CMM ‐Center for Mathematical Modeling‐, a research center of the University of Chile. The mission of the CMM is to create new mathematics and use mathematics to solve problems coming from other sciences, the industry and public policies. Its aim is to develop science with the highest standards, which also guides its endeavors in industrial research and education. We envision CMM as a world class center of excellence for research and advanced training in applied mathematics, internationally recognized as a platform for mathematical industrial modeling with the highest impact in innovation.