Postdoctoral positions – International call – First call closed.
Post-doctoral positions are available in the research areas listed below at the “Center for Computing in Engineering & Science – CCES” at the University of Campinas (UNICAMP), São Paulo, Brazil (http://cces.unicamp.br). The CCES is a multidisciplinary research center dedicated to the development and application of computational methods in computer science, mathematics, physics, chemistry, biology, and engineering. The positions are open for immediate start, with a two-year duration and possible extension up to five years. Funding is provided by FAPESP (São Paulo Research Foundation – www.fapesp.br), with competitive stipends and auxiliary research funding. The selected candidates will have access to health services at UNICAMP’s School of Medicine Hospital, and may also have access to additional funding to spend research time abroad at CCES partner institutions.
The areas of interest are listed below. Click on each area for further information on the projects to be developed. You may choose up to two different projects to apply for in the same application form.
> Towards a data infrastructure for CCES: Inquiry on Data Science and Semantic Interoperability
Title: Integration and interoperability of scientific datasets
Summary: The overwhelming amount of heterogeneous data produced by scientific experiments requires appropriate models (and formalisms) suited to express the meaning of data for humans and machines. Data semantics can play a key role in data exchange across scientific repositories. However, there is a lack of techniques to enable semantic retrieval and interoperability of scientific data. The goal of this project is to study methods for the publication and integration of scientific datasets that take the semantics of data into account. The challenge is to understand how existing models can be reused or adapted, and the issues involved in making heterogeneous data interoperable from one repository to another. This investigation will study the use of semantic models such as ontologies [1] to render the data semantics explicit. We will explore data models such as RDF(S) [2] and principles for publishing semantically enhanced scientific data. We will investigate how the case studies conducted by CCES members can benefit from publishing data based on such models.
References: [1] Guarino, N. Understanding, building and using ontologies. Int. J. Human-Computer Studies 46, 293-310 (1997). [2] https://www.w3.org/RDF/
Researchers involved: Profa. Claudia Maria Bauzer Medeiros (www.ic.unicamp.br/~cmbm), Prof. Julio Cesar dos Reis (www.ic.unicamp.br/~jreis), and other CCES researchers concerned with data infrastructure aspects.
Requisites: The candidate should have a PhD in Computer Science or related areas, experience with database methodologies and data management methods, and advanced computer programming skills. Knowledge of ontologies and the RDF data format will be considered a plus, as will experience working in multidisciplinary projects.
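To illustrate the kind of semantically explicit publishing the project describes, the sketch below serializes a dataset record as RDF triples in N-Triples syntax, reusing shared vocabularies (Dublin Core Terms, DCAT). The dataset URI, property choices, and helper function are hypothetical examples, not part of any existing CCES infrastructure; a real implementation would likely use a library such as rdflib.

```python
# Minimal sketch: publish a dataset record as RDF triples (N-Triples),
# making its semantics explicit via shared vocabularies.
# All URIs below are illustrative placeholders.

DCT = "http://purl.org/dc/terms/"
RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"
DCAT = "http://www.w3.org/ns/dcat#"

def to_ntriples(subject, properties):
    """Serialize (predicate, object) pairs into N-Triples lines.
    Objects starting with 'http' are emitted as URIs, others as literals."""
    lines = []
    for pred, obj in properties:
        if obj.startswith("http"):
            lines.append(f"<{subject}> <{pred}> <{obj}> .")
        else:
            lines.append(f'<{subject}> <{pred}> "{obj}" .')
    return "\n".join(lines)

dataset = "http://example.org/cces/dataset/simulation-42"
triples = to_ntriples(dataset, [
    (RDF_TYPE, DCAT + "Dataset"),
    (DCT + "title", "MD trajectory of protein X"),
    (DCT + "creator", "http://example.org/cces/researcher/7"),
])
print(triples)
```

Publishing data this way lets any RDF-aware repository interpret the record without knowing the producing system, which is the interoperability property the project targets.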
Title: Scientific data quality assurance
Summary: The FAIR (Findable, Accessible, Interoperable, Reusable) [1] movement is rapidly expanding all over the world as a means of creating certifiable, curated data sources. Many institutions are adopting these principles to develop their own processes for data cleaning, curation, and storage. The mission of these centers is to develop standards, tools, and techniques to ensure that research data produced in these institutions, whenever funded by public money, will meet the principles. Notwithstanding, there are no consensual mechanisms for ensuring data FAIR-ness. The very vagueness of the original definition of the acronym has spawned many interpretations (and implementations), several of which do not result in adequately curated data. The goal of this post-doctoral research is to specify and formalize a suite of methods to ensure curation of data that can be certifiably FAIR.
References: [1] Wilkinson, M. D. et al. The FAIR Guiding Principles for Scientific Data Management and Stewardship. Nature Scientific Data, May 2016. Available online at https://www.nature.com/articles/sdata201618.pdf
Researchers involved: Profa. Claudia Maria Bauzer Medeiros (www.ic.unicamp.br/~cmbm), Prof. Julio Cesar dos Reis (www.ic.unicamp.br/~jreis), and other CCES researchers concerned with data infrastructure aspects.
Requisites: The candidate should have a PhD in Computer Science or related areas, experience with database methodologies and data management methods, and advanced computer programming skills. Experience working in multidisciplinary projects will be considered a plus.
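As a toy illustration of what "machine-testable FAIR-ness" might mean, the sketch below checks a metadata record against a few proxy requirements loosely mapped to FAIR principles. The field names, the mapping to principles, and the record itself are all invented for the example; defining defensible checks like these is precisely the open problem the project addresses.

```python
# Illustrative sketch (not an official FAIR metric): check a dataset's
# metadata record against a few machine-testable proxies for FAIR
# principles. Field names and principle mappings are hypothetical.

REQUIRED = {
    "identifier": "F1: globally unique, persistent identifier",
    "access_url": "A1: retrievable by a standardized protocol",
    "format":     "I1: formal, shared representation language",
    "license":    "R1.1: clear usage license",
}

def fairness_report(record):
    """Return the descriptions of the proxy checks the record fails."""
    return [desc for field, desc in REQUIRED.items() if not record.get(field)]

record = {
    "identifier": "doi:10.1234/abcd",
    "access_url": "https://repo.example.org/ds/42",
    "format": "text/csv",
}
failures = fairness_report(record)
# The record lacks a license, so exactly one proxy check fails.
```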
Title: Experiment Reproducibility and Scientific Workflows
Summary: Computational modeling and simulation are characterized by being data-intensive, i.e., they manipulate increasing volumes of complex, dynamic data sources. The goal of this project is to provide a common data infrastructure to all researchers in the Center, supporting sharing and reuse of data and models used and/or produced by the Center. The idea is to adopt the notion of scientific workflows as the basis to specify executable models, and to create a common computational platform to design, annotate, and reuse such workflows. This will continue a set of ongoing initiatives within the Center towards creating a (reproducible) workflow repository [1], and provide a computational infrastructure to mine these workflows.
References: [1] Carvalho, L. A. M. C., Belhajjame, K., Medeiros, C. B. Semantic Software Metadata for Workflow Exploration and Evolution. Proc. 14th IEEE eScience Conference (2018).
Researchers involved: Profa. Claudia Maria Bauzer Medeiros (www.ic.unicamp.br/~cmbm), Prof. Julio Cesar dos Reis (www.ic.unicamp.br/~jreis)
Requisites: PhD degree in Computer Science or associated domains. Hands-on experience with workflow management systems. Ideally, the candidate will have worked in a multidisciplinary environment in which computer scientists work with scientists from other domains.
Additional information: http://www.lis.ic.unicamp.br/
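The scientific-workflow idea above can be sketched in a few lines: steps are declared together with their data dependencies, executed in topological order, and the execution order is recorded as a minimal form of provenance for reproducibility. The step names and data are invented for the example (a real system would use a workflow management system); Python 3.9+ is assumed for `graphlib`.

```python
# Minimal sketch of an executable scientific workflow with provenance.
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

def run_workflow(steps, deps):
    """steps: name -> callable(inputs dict) -> value.
    deps: name -> list of upstream step names (predecessors)."""
    results, provenance = {}, []
    for name in TopologicalSorter(deps).static_order():
        results[name] = steps[name]({d: results[d] for d in deps.get(name, [])})
        provenance.append(name)  # record execution order for reproducibility
    return results, provenance

steps = {
    "acquire": lambda inp: [1.0, 2.0, 3.0],                       # raw data
    "clean":   lambda inp: [x for x in inp["acquire"] if x > 1.0],
    "analyze": lambda inp: sum(inp["clean"]) / len(inp["clean"]),
}
deps = {"clean": ["acquire"], "analyze": ["clean"]}
results, provenance = run_workflow(steps, deps)
# provenance == ["acquire", "clean", "analyze"]
```

Annotating each step with semantic metadata, as in reference [1], is what turns such a script into a reusable, minable workflow.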
> Machine Learning methods for engineering and sciences
Title: Efficient allocation of cloud computing resources for deep-learning processing
Summary: Machine learning algorithms can learn from and make predictions on data. Recently, deep learning algorithms have been used to improve the state of the art in visual object recognition, speech recognition, and object detection [2]. Deep learning is also increasingly used to solve engineering and science problems. For example, Esteva et al. [3] applied deep-learning techniques to classify skin lesions to facilitate the detection of skin cancer, and Araujo et al. [4] applied deep learning to detect features in seismic data and improve the quality of seismic imaging. Deep learning models require very large training sets, from tens of thousands up to several million images, to make accurate predictions. As a consequence, training is a computationally costly process that often requires the use of high-performance computing systems. Cloud computing providers allow users to rent high-performance computing resources to accelerate their applications on the cloud. Moreover, cloud providers usually offer multiple computing resource options with different cost and performance characteristics. For example, some of Microsoft’s and Amazon’s virtual machines contain GPUs, while others contain fast solid-state drives (SSDs) [1]. Nonetheless, selecting the best option for a given scientific or engineering program may be a challenge in itself. For instance, the performance of reservoir simulation software that does not contain GPU code is not improved when it is executed on a machine that contains a GPU; however, the cloud provider charges the user for the time the virtual machine is running, even if the GPU is not being used. Hence, using a virtual machine that does not fit the application's needs well may cause the user to be overcharged by a factor of up to 240x, since the cost of a virtual machine may range from 0.01 USD to 24 USD per hour. In this context, it is essential to select the proper computational resources to optimize the cost-benefit of the system. The goal of this project is to investigate the cost and benefit of using different computational resources from cloud providers to accelerate deep learning training and inference, and to develop automatic methods to optimize cloud resource allocation for deep learning applications.
References: [1] Netto, M. A. S., Calheiros, R. N., Rodrigues, E. R., Cunha, R. L. F., Buyya, R. HPC Cloud for Scientific and Business Applications: Taxonomy, Vision, and Research Challenges. ACM Computing Surveys 51 (2018). [2] LeCun, Y., Bengio, Y., Hinton, G. Deep learning. Nature 521, 436-444 (2015). [3] Esteva, A., Kuprel, B., Novoa, R. A., Ko, J., Swetter, S. M., Blau, H. M., Thrun, S. Dermatologist-level classification of skin cancer with deep neural networks. Nature 542, 115-118 (2017). [4] Araujo, L., Oliveira, F. M., Faccipieri, J. H., Avila, S., Tygel, M., Borin, E. Automatic diffraction apex region detection using convolutional neural networks. SEG/SBGf Workshop Machine Learning, Rio de Janeiro, Brazil (2018).
Researchers involved: Prof. Dr. Edson Borin (www.ic.unicamp.br/~edson), Prof. Dr. Martin Tygel (http://www.ime.unicamp.br/~tygel/), and other CCES researchers developing or using deep-learning-based applications.
Requisites: Candidates are expected to have experience with cloud computing, including deploying and using applications on the cloud, and experience with scripting languages (e.g., Python) and computer system administration (Linux). Experience developing or using deep-learning code on GPU-based or distributed computer systems is a plus.
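The cost-benefit trade-off described above can be made concrete with a small sketch: given an hourly price and a measured training throughput for each virtual machine type, compute the total cost of finishing a fixed training job and pick the cheapest option. All prices and throughputs below are made-up numbers for illustration, not quotes from any provider.

```python
# Illustrative sketch: pick the most cost-effective VM type for a
# deep-learning training job. Numbers are hypothetical.

def cheapest_vm(vm_options, total_images, epochs):
    """vm_options: name -> (usd_per_hour, images_per_second).
    Returns (name, total_cost_usd) of the cheapest option."""
    best = None
    for name, (price, throughput) in vm_options.items():
        hours = total_images * epochs / throughput / 3600.0
        cost = hours * price
        if best is None or cost < best[1]:
            best = (name, cost)
    return best

vms = {
    "cpu-only":  (0.10,   50.0),  # cheap per hour, but very slow
    "gpu-small": (0.90,  800.0),
    "gpu-large": (3.00, 2000.0),  # fastest, but not cheapest overall
}
name, cost = cheapest_vm(vms, total_images=1_000_000, epochs=10)
# name == "gpu-small": the fastest machine is not the cheapest way
# to finish the job, and neither is the cheapest-per-hour one.
```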
Title: Application of deep learning techniques to improve seismic imaging
Summary: Deep-learning techniques are the state of the art for numerous machine learning and computer vision applications [1]. These techniques have been successfully applied in industry products that take advantage of large volumes of digital data. Companies such as Amazon, Baidu, Facebook, Google, and Microsoft are aggressively pushing forward deep-learning-related projects. Deep learning is also increasingly used to solve engineering and science problems. For example, Esteva et al. [2] applied deep-learning techniques to classify skin lesions to facilitate the detection of skin cancer, and Araujo et al. [3] applied deep learning to detect features in seismic data and improve the quality of seismic imaging. Exploration geophysics is extensively applied by the oil industry to better understand the subsurface of the Earth, thus raising the chances of finding reservoirs of fossil fuels and minerals. This is a branch of geophysics that aims to characterize the subsurface by conducting measurements at the Earth's surface (land or sea). During seismic data acquisition, a source generates waves that propagate through the water and reflect, diffract, and refract in the subsurface layers. Some of the reflected waves return to the surface and reach hydrophones, which record the data. Once recorded, the seismic data is processed to improve the signal-to-noise ratio and to construct images that help geologists understand the meaningful features of the subsurface. The Common Reflection Surface (CRS) algorithm [4,5] is a generalized version of one of the key algorithms used in exploration geophysics to improve the signal-to-noise ratio, the Common Midpoint (CMP) method [6]. Even though several algorithms have been designed over the years to improve the signal-to-noise ratio, seismic data acquired from deep-water pre-salt reservoirs bring new challenges to this realm. The main goal of this project is to apply deep-learning algorithms to improve the signal-to-noise ratio of seismic data extracted from deep-water pre-salt reservoirs.
References: [1] LeCun, Y., Bengio, Y., Hinton, G. Deep learning. Nature 521, 436-444 (2015). [2] Esteva, A., Kuprel, B., Novoa, R. A., Ko, J., Swetter, S. M., Blau, H. M., Thrun, S. Dermatologist-level classification of skin cancer with deep neural networks. Nature 542, 115-118 (2017). [3] Araujo, L., Oliveira, F. M., Faccipieri, J. H., Avila, S., Tygel, M., Borin, E. Automatic diffraction apex region detection using convolutional neural networks. SEG/SBGf Workshop Machine Learning, Rio de Janeiro, Brazil (2018). [4] Mann, J., Jäger, R., Müller, T., Höcht, G., Hubral, P. Common-reflection-surface stack – a real data example. Journal of Applied Geophysics 42, 301-318 (1999). [5] Jäger, R., Mann, J., Höcht, G., Hubral, P. Common-reflection-surface stack: image and attributes. Geophysics 66, 97-109 (2001). [6] Mayne, W. H. Common reflection point horizontal data stacking techniques. Geophysics 27, 927-938 (1962).
Researchers involved: Prof. Dr. Edson Borin (www.ic.unicamp.br/~edson), Prof. Dr. Martin Tygel (http://www.ime.unicamp.br/~tygel/), and Profa. Dra. Sandra Avila
Requisites: Experience with deep-learning methods. Experience with geophysics is desirable, but not mandatory.
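The principle behind the CMP/CRS stacking methods mentioned above is easy to demonstrate numerically: averaging N traces that share the same signal but carry independent noise attenuates the noise by roughly a factor of sqrt(N). The synthetic sine-wave "trace" below is a toy stand-in for real seismic data.

```python
# Sketch of why stacking improves the signal-to-noise ratio:
# averaging N independently noisy copies of the same signal reduces
# the noise RMS by about 1/sqrt(N). Signal and noise are synthetic.
import math
import random

random.seed(0)
signal = [math.sin(2 * math.pi * t / 50.0) for t in range(200)]

def noisy_trace(sigma=1.0):
    """One synthetic trace: the common signal plus Gaussian noise."""
    return [s + random.gauss(0.0, sigma) for s in signal]

def stack(traces):
    """Sample-wise average of a list of traces (the 'stack')."""
    return [sum(samples) / len(traces) for samples in zip(*traces)]

def noise_rms(trace):
    """RMS of the residual noise relative to the known clean signal."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(trace, signal)) / len(signal))

single = noise_rms(noisy_trace())
stacked = noise_rms(stack([noisy_trace() for _ in range(64)]))
# stacked noise is roughly single / 8, i.e. a 1/sqrt(64) reduction
```

Real stacking first requires aligning traces along moveout curves, which is where the CMP and CRS operators (and, in this project, deep learning) come in.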
Title: Multi-scale Modeling of Hydrogen Storage in Metal-Organic Frameworks (MOFs)
Summary: Nanomaterials represent a new and unique opportunity in materials science today. One example of where nanomaterials could be very important in basic science and technological applications is the field of hydrogen storage. Among several promising nanostructures for hydrogen storage purposes, metal-organic frameworks (MOFs) stand out [1]. One important characteristic of these structures is that they are in general very porous, making them ideal candidates to store atoms and/or molecules. Due to the high reactivity associated with the organic groups, an almost infinite number of these structures can be synthesised, which makes computational screening extremely valuable. Recent advances in modelling these systems [2] show that computer modelling can be very effective in designing and selecting candidate structures for hydrogen storage. In recent years a significant number of new MOF structures have been proposed and/or synthesised [1], creating renewed interest in these structures.
Objectives: To investigate the possibility of hydrogen storage in MOF structures; to investigate the performance of different structures; to use machine learning to screen good candidate structures; to combine different techniques to address aspects at different scales (ab initio for small structures; Monte Carlo and molecular dynamics for large structures; and macroscale models with finite element methods).
References: [1] Colson, J. W. Science 332, 228 (2011). [2] Holst, J. R., Trewin, A., Cooper, A. I. Nature Chem. 2, 915 (2010).
Researchers involved: Prof. Munir S. Skaf (http://cces.unicamp.br/2018/10/13/munir-skaf/), Prof. Douglas S. Galvao (https://sites.ifi.unicamp.br/galvao/en/)
Requisites: Experience with Monte Carlo, molecular dynamics, ab initio methods, finite elements, and machine learning
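As a flavor of the Monte Carlo side of the multi-scale toolbox, the sketch below runs a toy grand-canonical Metropolis simulation of gas adsorption on independent binding sites, the simplest caricature of hydrogen uptake in a porous framework. Energies, temperature, and site count are in arbitrary reduced units and are purely illustrative, not fitted to any real MOF.

```python
# Toy grand-canonical Monte Carlo sketch of adsorption on independent
# sites. For independent sites the exact average occupancy is
# 1 / (1 + exp((E - mu)/kT)), which the sampler should reproduce.
import math
import random

random.seed(1)

def gcmc_occupancy(n_sites, binding_energy, mu, kT, steps=200_000):
    """Metropolis insertion/deletion moves; returns average occupancy."""
    occupied = [False] * n_sites
    n_occ = 0
    acc = 0
    for _ in range(steps):
        i = random.randrange(n_sites)
        # Energy change of flipping site i: inserting a particle costs
        # (E - mu); deleting one gains it back.
        dE = (binding_energy - mu) * (-1 if occupied[i] else 1)
        if dE <= 0 or random.random() < math.exp(-dE / kT):
            occupied[i] = not occupied[i]
            n_occ += 1 if occupied[i] else -1
        acc += n_occ
    return acc / (steps * n_sites)

theta = gcmc_occupancy(n_sites=50, binding_energy=-1.0, mu=0.0, kT=0.5)
# Exact value for these parameters: 1 / (1 + exp(-2)) ~ 0.88
```

Real MOF screening replaces the independent-site energy with force-field interactions between the gas and the framework, but the acceptance rule is the same.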
Title: Multi-scale Modeling of Carbon Nanotube-based Artificial Muscles
Summary: Artificial muscles [1] are of practical interest, with many potential applications in a large number of areas, from medicine to the military. However, up to now few types of them have been commercially exploited. Recently, significant advances in this field have been achieved using carbon nanotube (CNT) yarns [2], making it possible to create artificial muscles that overcome many of the limitations of earlier designs. These structures provide fast, high-force, large-stroke torsional and tensile actuation. These artificial muscles can be electrically, chemically, or even photonically actuated, which can be realised by simply embedding paraffin-like materials into the yarns. These muscles have very complex structures, and there are still many questions that need to be better understood in order to allow the creation of new and more efficient artificial muscles. In this project we intend to investigate the topological and mechanical properties of simplified models of these artificial muscles.
Objectives: To create structural models representative of the CNT yarns; to investigate the physical origin of the coiling process; to investigate the role of the embedded paraffin in the actuation process.
References: [1] Foroughi, J. et al. Science 334, 494 (2011). [2] Lima, M. D. et al. Science 338, 928 (2012).
Researchers involved: Prof. Douglas S. Galvao (https://sites.ifi.unicamp.br/galvao/en/), Prof. Paulo Sollero (http://cces.unicamp.br/2018/12/17/paulo-sollero/)
Requisites: Experience with molecular dynamics, ab initio methods, finite elements, and machine learning.
Title: Structural, Mechanical and Electronic Properties of Carbon-based Porous Structures
Summary: With the advent of graphene, there is renewed interest in other carbon allotrope structures, in particular 3D porous structures. Among these structures we can mention schwarzites [1] and tubulanes [2]. In this project we intend to study the structural, mechanical, and electronic properties of such structures. We also intend to search for similar structures using machine learning tools.
References: [1] Sajadi, S. M., Owuor, P. S., Schara, S., Woellner, C. F., Rodrigues, V., Vajtai, R., Lou, J., Galvao, D. S., Tiwary, C. S., Ajayan, P. M. Adv. Mater. 30, 1704820 (2018). [2] Baughman, R. H., Galvao, D. S. Chem. Phys. Lett. 211, 110 (1993).
Researchers involved: Prof. Douglas S. Galvao (https://sites.ifi.unicamp.br/galvao/en/), Prof. Guido Araujo (https://guidoaraujo.wordpress.com)
Requisites: Experience with molecular dynamics, ab initio methods, finite elements, and machine learning tools.
Title: High-fidelity simulations and statistical analysis of wall-bounded turbulent flows
Summary: This research proposal concerns the investigation of wall-bounded turbulent flows using high-fidelity numerical simulations. Capturing the relevant turbulence dynamics in these flows is of paramount importance for the design of several engineering configurations such as wings, wind turbine blades, and centrifugal compressors. The main issue in such flows, however, is the presence of multiple spatial and temporal scales, which makes it difficult to correctly describe the flow dynamics near the walls. Basically, the turbulence kinetic energy is carried by eddies of different characteristic length scales in layers near and far from the wall. Therefore, it is necessary to use a method which captures the properties of the flows over a broad range of frequencies and spatial scales. We will employ large eddy simulations (LES) to simulate the flows at moderate Reynolds numbers. The LES technique resolves the most energetic scales, which are associated with the turbulence production mechanisms. In order to understand the flow physics, statistical techniques such as proper orthogonal decomposition and deep learning will be employed.
Researchers involved: William Roberto Wolf, Guido Araújo
Requisites: Knowledge of LES, DNS, and large-scale parallel computing (MPI)
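Proper orthogonal decomposition (POD), mentioned above, extracts the dominant spatial structures from flow snapshots: the leading POD mode is the dominant eigenvector of the snapshot covariance matrix. The pure-Python sketch below recovers a planted mode from tiny synthetic "snapshots" via power iteration; real LES data would use an SVD from numpy/scipy instead, and the 4-point "field" here is purely illustrative.

```python
# Minimal POD sketch: recover the dominant spatial mode from snapshots.
import math
import random

random.seed(2)

# Synthetic snapshots: a fixed spatial mode with random amplitude + noise.
mode = [1.0, 2.0, -1.0, 0.5]
snapshots = []
for _ in range(100):
    a = random.gauss(0.0, 1.0)
    snapshots.append([a * m + random.gauss(0.0, 0.05) for m in mode])

# Snapshot covariance matrix C_ij = <u_i u_j>.
n = len(mode)
cov = [[sum(s[i] * s[j] for s in snapshots) / len(snapshots) for j in range(n)]
       for i in range(n)]

def power_iteration(matrix, iters=200):
    """Dominant eigenvector by repeated multiplication and normalization."""
    v = [1.0] * len(matrix)
    for _ in range(iters):
        w = [sum(matrix[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

pod_mode = power_iteration(cov)
# The recovered mode aligns (up to sign) with the planted one.
norm_true = math.sqrt(sum(m * m for m in mode))
alignment = abs(sum(p * m / norm_true for p, m in zip(pod_mode, mode)))
```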
> High-performance FPGA accelerators for applications in engineering and sciences
Title: Accelerating Gene Network Simulation in FPGA
Summary: Recent advances in Systems Biology pose computational challenges that surpass the capabilities of current computing platforms based on conventional CPUs and GPUs. A particular challenge in this regard is the simulation of the dynamics of Gene Regulatory Networks (GRNs), because the number of possible network states, and thus the required computational time, grows exponentially with the number of network components. FPGA-based accelerators appear as promising alternatives to simulations on conventional software platforms. However, their application in the dynamic simulation of GRNs has been hindered by the inaccessibility of this technology to users without the proper expertise. Heterogeneous CPU-FPGA computing platforms, combined with high-level, flexible tools to deploy FPGA implementations, represent a powerful avenue to open the benefits of hardware acceleration to biologists. At CCES, we aim to develop a high-level framework that takes advantage of hardware acceleration for the simulation of GRNs without compromising flexibility.
Researchers involved: Guido Araujo and Marcelo Carrazolle
Requisites: The ideal candidate for this position should have a PhD in Computer Science/Engineering, Electrical Engineering, or Mechatronics, a good background in computer architecture, and strong programming skills in hardware description languages (HDLs) such as VHDL/Verilog, with previous experience in the design of complex FPGA systems. Knowledge of genomics or computational biology is not required but will be considered a plus.
Additional information: Please contact guido@ic.unicamp.br
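To make the state-space explosion concrete, the sketch below is a software reference model of synchronous Boolean GRN dynamics: each gene's next state is a Boolean function of the current state, and trajectories are iterated until they fall into an attractor cycle. The 3-gene network and its rules are made up for the example; with N genes there are 2^N states, which is what motivates offloading this exhaustive exploration to an FPGA.

```python
# Software reference model of synchronous Boolean GRN dynamics.
# The 3-gene network below is an invented example, not a real GRN.

# Each gene's next state is a Boolean function of the full current state.
rules = {
    0: lambda s: not s[2],        # gene 0 is repressed by gene 2
    1: lambda s: s[0],            # gene 1 is activated by gene 0
    2: lambda s: s[0] and s[1],   # gene 2 needs both genes 0 and 1
}

def step(state):
    """One synchronous update: all genes evaluated on the same state."""
    return tuple(bool(rules[i](state)) for i in range(len(state)))

def find_attractor(state, max_steps=100):
    """Iterate until a previously visited state recurs; return the cycle."""
    seen = {}
    trajectory = []
    for t in range(max_steps):
        if state in seen:
            return trajectory[seen[state]:]  # the attractor cycle
        seen[state] = t
        trajectory.append(state)
        state = step(state)
    return None

cycle = find_attractor((True, False, False))
# This initial state lies on a 5-state limit cycle of the toy network.
```

An FPGA implementation would evaluate all gene update functions in parallel per clock cycle and sweep many initial states concurrently, which is where the speedup over this sequential model comes from.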
> Efficient migration of high performance computing science and engineering applications to the cloud
Title: Modeling of structural properties of large molecular assemblies using the cloud
Summary: Mass spectrometry is a vital tool for molecular characterization in many areas of chemical analysis and is becoming increasingly important in the field of structural biological chemistry. Ion-Mobility Mass Spectrometry (IM-MS) is an analytical experimental technique capable of separating and identifying ionized molecules moving through a gas-filled drift tube based on their mobility. The adequate (and useful) interpretation of the data produced by such experiments relies heavily on computational estimates of the macromolecular Collision Cross Section (CCS). Accurate theoretical CCS calculations for large biomolecules such as proteins and protein aggregates are highly computationally demanding. Scientists at CCES (Guido Araujo and Munir Skaf) have recently created a new and very efficient code (HPCCS) using HPC methods [1,2]. Nevertheless, the use of HPCCS for protein assemblies is still limited. Here, we propose a challenging extension of HPCCS using cloud computing to handle much larger biomolecular systems.
References: [1] Zanotto, L., Heerdt, G., Souza, P. C. T., Araujo, G., Skaf, M. S. High Performance Collision Cross Section Calculation – HPCCS. Journal of Computational Chemistry (2018). [2] Yviquel, H., Cruz, L., Araujo, G. Cluster Programming using the OpenMP Accelerator Model. ACM Transactions on Architecture and Code Optimization (2018).
Researchers involved: Guido Araujo and Munir S. Skaf
Requisites: The ideal candidate for this project should have a PhD in Physics, Chemistry, or related areas, experience in molecular dynamics simulations, advanced computer programming skills, and experience with parallelization models such as OpenMP, OpenACC, or MPI, using C/C++ on multicore machines. Previous knowledge of GPU programming (e.g., CUDA) or the Microsoft Azure/Amazon AWS clouds is a plus.
Title: Distributed and cloud computing for high-throughput protein modeling
Summary: Development of efficient and user-friendly strategies to distribute the execution of protein modeling applications in cloud facilities or distributed environments, and development of a server for high-throughput execution of the protein design methods under development at the CCES.
References: Ferrari et al. Statistical force-field for structural modeling using chemical cross-linking/mass-spectrometry distance constraints. Bioinformatics (2019) ([link]); Santos et al. Enhancing protein fold determination by exploring the complementary information of chemical cross-linking and coevolutionary signals. Bioinformatics 34, 2201 (2018) ([link])
Researchers involved: Prof. Leandro Martínez (http://leandro.iqm.unicamp.br) and Prof. Guido Araújo (https://guidoaraujo.wordpress.com)
Requisites: The ideal candidate for this project should have a PhD in Physics, Chemistry, Computer Science, or related areas, with advanced computer programming skills and experience in system administration, cloud, and distributed computing. Experience in bioinformatics and computational biology is desirable.
Title: Efficient allocation of cloud computing resources for deep-learning processing. This project is also offered under "Machine Learning methods for engineering and sciences" above; see that area for the full description, references, researchers involved, and requisites.
Title: Algorithms, methods and tools to migrate high-performance computing science and engineering applications to the cloud
Summary: Since the 1990s, scientists and engineers have used clusters of computers to execute high-performance computing programs. This kind of system is usually expensive, and historically only a few people had access to high-performance computing systems. Nonetheless, advances in virtualization technologies enabled the creation of the cloud computing model, which allows anyone to execute programs on modern high-performance computers and pay per use. In this scenario, users may choose among several different hardware configurations and prices to configure their high-performance cluster of computers. This opens the opportunity for several optimizations, such as avoiding long waits in job queues and creating specialized clusters for each application. However, migrating code to the cloud, selecting the most cost-effective set of resources for each application, and dealing with performance fluctuations on virtual network infrastructures are still challenges that must be tackled [1]. To mitigate the network performance problems, some providers offer specialized HPC services on the cloud. These services provide guarantees on network performance; nonetheless, they are more expensive and usually limited in the maximum number of machines that can be rented. While some high-performance applications may benefit from these extra guarantees, others are tolerant to network performance variations. Hence, selecting the most cost-effective service for each application may be a challenge in itself. High-performance science and engineering applications may also depend on multiple software packages, including specialized libraries, and in many cases installing and configuring these packages on new systems may be challenging. In this sense, it is crucial to investigate technologies to ease the migration and efficient execution of high-performance programs on the cloud.
References: [1] Netto, M. A. S., Calheiros, R. N., Rodrigues, E. R., Cunha, R. L. F., Buyya, R. HPC Cloud for Scientific and Business Applications: Taxonomy, Vision, and Research Challenges. ACM Computing Surveys 51 (2018).
Researchers involved: Prof. Dr. Edson Borin (www.ic.unicamp.br/~edson), Prof. Dr. Martin Tygel (http://www.ime.unicamp.br/~tygel/), and other CCES researchers developing or using deep-learning-based applications.
Requisites: Experience generating and maintaining parallel code for shared-memory (e.g., OpenMP) and distributed-memory (e.g., MPI) computing systems. Experience with cloud computing and high-performance computing is also desirable.
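The resource-selection problem above can be sketched with a simple cost model: an MPI job whose parallel efficiency degrades with node count (an Amdahl-style model with a serial fraction) must finish before a deadline, and we want the cheapest cluster size that achieves this. The serial fraction, price, and deadline below are hypothetical numbers chosen for illustration.

```python
# Illustrative sketch: cheapest cloud cluster size for an MPI job under
# an Amdahl-like scaling model. All parameters are hypothetical.

def best_cluster(base_hours, serial_fraction, usd_per_node_hour,
                 deadline_hours, max_nodes=64):
    """Return (nodes, total_cost_usd) for the cheapest cluster size that
    meets the deadline, or None if no size up to max_nodes does."""
    best = None
    for n in range(1, max_nodes + 1):
        # Amdahl's law: serial part does not speed up with more nodes.
        runtime = base_hours * (serial_fraction + (1 - serial_fraction) / n)
        cost = runtime * n * usd_per_node_hour
        if runtime <= deadline_hours and (best is None or cost < best[1]):
            best = (n, cost)
    return best

nodes, cost = best_cluster(base_hours=100.0, serial_fraction=0.05,
                           usd_per_node_hour=1.0, deadline_hours=12.0)
# More nodes finish sooner but pay for idle time on the serial fraction,
# so the cheapest feasible cluster is the smallest one meeting the deadline.
```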
> Programming models for High-Performance Cloud Computing
Title: Modeling of structural properties of large molecular assemblies using the cloud. This project is also offered under "Efficient migration of high performance computing science and engineering applications to the cloud" above; see that area for the full description, references, researchers involved, and requisites.
Title: Distributed and cloud computing for high-throughput protein modeling Summary: Development of efficient and user-friendly strategies to distribute the execution of protein modeling applications on cloud facilities or distributed environments. The project also includes developing a server for high-throughput execution of the protein design methods under development at the CCES. References: Ferrari et al. Bioinformatics, 2019 ([link]); Santos et al. Bioinformatics, 34, 2201, 2018 ([link]) Researchers involved: Prof. Leandro Martínez (http://leandro.iqm.unicamp.br) and Prof. Guido Araújo (https://guidoaraujo.wordpress.com) Requisites: The ideal candidate for this project should have a PhD in Physics, Chemistry, Computer Science or related areas, advanced computer programming skills, and system administration, cloud and distributed computing experience. Experience in bioinformatics and computational biology is desirable. |
Title: Packmol: parallelization and implementation of new features Summary: Packmol [LINK] is a package to build initial configurations for molecular dynamics simulations [1,2]. Downloaded by tens of thousands of users and cited by over a thousand scientific articles, it has become a fundamental tool for the study of complex molecular systems. This project aims to improve the Packmol package by parallelizing it for multiple CPUs and GPUs, and possibly to implement new features, such as periodic boundary conditions and perhaps a graphical user interface. References: 1. L. Martínez, R. Andrade, E. G. Birgin, J. M. Martínez, Packmol: A package for building initial configurations for molecular dynamics simulations, J. Comp. Chem. 30, 2157, 2009. 2. J. M. Martínez, L. Martínez, Packing Optimization for Automated Generation of Complex System’s Initial Configurations for Molecular Dynamics and Docking, J. Comp. Chem. 24, 819-825, 2003. Researchers involved: Prof. Leandro Martínez (http://leandro.iqm.unicamp.br) and Prof. Guido Araujo (https://guidoaraujo.wordpress.com). Requisites: Advanced computer programming skills with emphasis on high-performance numerical computing and parallel implementation of software on CPUs and graphics processing units. Experience with molecular simulations is a must. |
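The optimization problem behind Packmol [1,2] — place molecules so that all pairwise distances respect a tolerance while staying inside user-defined regions — can be caricatured with hard spheres in a box. The sketch below uses a deliberately naive stochastic descent on a Packmol-style penalty; the real package minimizes a smooth merit function with a box-constrained solver (GENCAN, per [1]), and the penalty evaluation is exactly the O(n²) kernel that the proposed CPU/GPU parallelization would target. All numbers here are toy values.

```python
import math
import random

def overlap_penalty(centers, rmin, box):
    """Packmol-style objective (toy): squared violation of the minimum
    pairwise distance plus squared excursions outside [0, box]^3."""
    f = 0.0
    n = len(centers)
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(centers[i], centers[j])
            if d < rmin:
                f += (rmin - d) ** 2
        for c in centers[i]:
            if c < 0.0:
                f += c * c
            elif c > box:
                f += (c - box) ** 2
    return f

def pack(n, rmin=1.0, box=5.0, iters=20000, seed=7):
    """Crude stochastic descent: accept random single-sphere moves
    that lower the penalty (illustration only, not Packmol's method)."""
    rng = random.Random(seed)
    pts = [[rng.uniform(0, box) for _ in range(3)] for _ in range(n)]
    f = overlap_penalty(pts, rmin, box)
    for _ in range(iters):
        i = rng.randrange(n)
        old = pts[i][:]
        pts[i] = [c + rng.uniform(-0.2, 0.2) for c in old]
        fnew = overlap_penalty(pts, rmin, box)
        if fnew < f:
            f = fnew
        else:
            pts[i] = old  # reject uphill moves
    return pts, f

pts, f = pack(20)
print(f)  # typically near zero once all pairs are separated by rmin
```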
> Integrated multi-omics analysis for biotechnological applications
Title: Integrated multi-omics analysis and metabolic network simulations applied to Saccharomyces cerevisiae for second generation ethanol production Summary: Developing an integrated multi-omics analysis using transcriptomics, metabolomics and proteomics data from genetically modified yeast for second generation ethanol production. In addition, the candidate will perform metabolic network modeling and simulation using a combination of flux balance analysis and machine learning approaches to identify metabolic bottlenecks in the fermentation pathway. Researchers involved: Dr. Marcelo Falsarella Carazzolle (mcarazzo@lge.ibi.unicamp.br) and Prof. Dr. Guido Araujo (guido@ic.unicamp.br) Requisites: Experience in bioinformatics and integrated multi-omics analysis applied to Saccharomyces cerevisiae. Advanced computer programming skills, advanced knowledge of metabolic simulation approaches using Flux Balance Analysis (FBA) and stochastic Petri net methodologies. |
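Flux balance analysis reduces to a linear program: maximize one flux subject to steady-state mass balance (S·v = 0) and flux bounds. The toy below replaces the LP solver with an exhaustive grid search over a three-reaction network; the reaction names, stoichiometry and bounds are invented for illustration, and genome-scale work would use a proper model and an LP solver (e.g. via COBRApy).

```python
from itertools import product

# Hypothetical toy network (invented for illustration):
#   v_glc : glucose uptake -> A
#   v_eth : A -> ethanol        (objective)
#   v_bio : A -> biomass precursors (forced minimum drain)
# Steady state for the internal metabolite A: v_glc - v_eth - v_bio = 0
S = {"A": {"v_glc": 1.0, "v_eth": -1.0, "v_bio": -1.0}}
bounds = {"v_glc": (0.0, 10.0), "v_eth": (0.0, 10.0), "v_bio": (1.0, 10.0)}

def fba_grid(S, bounds, objective, step=0.5):
    """Tiny FBA by exhaustive grid search: maximize one flux subject
    to S.v = 0 and flux bounds. Real FBA solves this as an LP."""
    names = sorted(bounds)
    grids = [[lo + k * step for k in range(int((hi - lo) / step) + 1)]
             for lo, hi in (bounds[n] for n in names)]
    best, best_v = None, None
    for combo in product(*grids):
        v = dict(zip(names, combo))
        # Keep only steady-state flux vectors.
        if all(abs(sum(c * v[r] for r, c in row.items())) < 1e-9
               for row in S.values()):
            if best is None or v[objective] > best:
                best, best_v = v[objective], v
    return best, best_v

best, v = fba_grid(S, bounds, "v_eth")
print(best, v)  # max ethanol flux: full uptake minus the biomass drain
```

A bottleneck analysis of the kind proposed here would then perturb the bounds (simulating a knockout or overexpression) and watch how the optimum shifts.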
Title: Prospecting cold-adapted enzymes for biomass degradation applied to second generation ethanol Summary: Several efforts have been made globally to replace fossil fuels with renewable ones. This study is planned to understand the dynamics of lignocellulosic biomass deconstruction in cold environments and to search for enzymes with optimum activity at moderate temperatures, in order to compose a minimal cocktail designed for Simultaneous Saccharification and Fermentation (SSF) aiming at the production of second-generation (2G) ethanol. The recent revolution in DNA sequencing technologies has produced exponential growth in public DNA databases, especially in banks of metagenomes generated from the most diverse environments. In parallel, recent developments in high-performance computing, bioinformatics analyses and molecular dynamics simulations of proteins have made it possible to prospect for new genes in these databases, creating a great opportunity to identify new enzymes. In this context, the main goal of this project is to prospect cold-adapted enzymes for biomass deconstruction from bacteria and fungi using public metagenomic databases and to select targets for wet-lab validation using molecular dynamics simulations of proteins. Researchers involved: Dr. Marcelo Falsarella Carazzolle (mcarazzo@lge.ibi.unicamp.br) and Prof. Dr. Munir Skaf (skaf@unicamp.br) Requisites: Experience in bioinformatics analysis applied to metagenomics. Experience in molecular docking and molecular dynamics of proteins. Knowledge of carbohydrate-active enzymes. |
> Prospecting biomass-degrading enzymes using metagenomics and protein modeling approaches
Title: Prospecting cold-adapted enzymes for biomass degradation applied to second generation ethanol Summary: Several efforts have been made globally to replace fossil fuels with renewable ones. This study is planned to understand the dynamics of lignocellulosic biomass deconstruction in cold environments and to search for enzymes with optimum activity at moderate temperatures, in order to compose a minimal cocktail designed for Simultaneous Saccharification and Fermentation (SSF) aiming at the production of second-generation (2G) ethanol. The recent revolution in DNA sequencing technologies has produced exponential growth in public DNA databases, especially in banks of metagenomes generated from the most diverse environments. In parallel, recent developments in high-performance computing, bioinformatics analyses and molecular dynamics simulations of proteins have made it possible to prospect for new genes in these databases, creating a great opportunity to identify new enzymes. In this context, the main goal of this project is to prospect cold-adapted enzymes for biomass deconstruction from bacteria and fungi using public metagenomic databases and to select targets for wet-lab validation using molecular dynamics simulations of proteins. Researchers involved: Dr. Marcelo Falsarella Carazzolle (mcarazzo@lge.ibi.unicamp.br) and Prof. Dr. Munir Skaf (skaf@unicamp.br) Requisites: Experience in bioinformatics analysis applied to metagenomics. Experience in molecular docking and molecular dynamics of proteins. Knowledge of carbohydrate-active enzymes. |
> Modeling of biomolecular and other heterogeneous materials
Title: Distributed and cloud computing for high-throughput protein modeling Summary: Development of efficient and user-friendly strategies to distribute the execution of protein modeling applications on cloud facilities or distributed environments. The project also includes developing a server for high-throughput execution of the protein design methods under development at the CCES. References: Ferrari et al. Statistical force-field for structural modeling using chemical cross-linking/mass-spectrometry distance constraints. Bioinformatics, 2019 ([link]); Santos et al. Enhancing protein fold determination by exploring the complementary information of chemical cross-linking and coevolutionary signals. Bioinformatics, 34, 2201, 2018 ([link]) Researchers involved: Prof. Leandro Martínez (http://leandro.iqm.unicamp.br) and Prof. Guido Araújo (https://guidoaraujo.wordpress.com) Requisites: The ideal candidate for this project should have a PhD in Physics, Chemistry, Computer Science or related areas, advanced computer programming skills, and system administration, cloud and distributed computing experience. Experience in bioinformatics and computational biology is desirable. |
Title: Structural, Electronic and Mechanical Properties of Superwood Summary: Wood is a very important material: it combines low cost and abundance, and it has been used in many structural applications for millennia. However, some of its properties, such as strength and toughness, are unsatisfactory for some advanced engineering applications. Recently [1], a new process was demonstrated to transform bulk natural wood into a high-performance structural material. This process involves partial removal of lignin and cellulose, followed by hot pressing. It results in a material (superwood) with enhanced mechanical and other properties. However, the detailed mechanisms behind these enhancements are still not completely understood at the atomic scale [1,2]. Addressing this issue is one of the objectives of the present project. References: 1. J. Song et al., Nature 554, 224 (2018) 2. T. Li et al., Sci. Adv. 4, eaar3724 (2018) Researchers involved: Prof. Munir S. Skaf (http://cces.unicamp.br/2018/10/13/munir-skaf/), Prof. Douglas S. Galvao (https://sites.ifi.unicamp.br/galvao/en/) Requisites: Experience with molecular dynamics simulations, ab initio methods, finite elements |
Title: Packmol: parallelization and implementation of new features Summary: Packmol [LINK] is a package to build initial configurations for molecular dynamics simulations [1,2]. Downloaded by tenths of thousands of users and cited by over a thousand scientific articles, it has become a fundamental tool for the study of complex molecular systems. This project aims the improvement of the Packmol package by parallelizing it to multiple CPUs and GPUs, and possibly the implementation of new fatures, as periodic boundary conditions and perhaps a graphical user interface. References: 1. L. Martínez, R. Andrade, E. G. Birgin, J. M. Martínez, Packmol: A package for building initial configurations for molecular dynamics simulations, J. Comp. Chem. 30, 2157, 2009. 2. J. M. Martínez, L. Martínez, Packing Optimization for Automated Generation of Complex System’s Initial Configurations for Molecular Dynamics and Docking, J. Comp. Chem. 24, 819-825, 2003. Researchers involved: Prof. Leandro Martínez (http://leandro.iqm.unicamp.br) and Prof. Guido Araujo (https://guidoaraujo.wordpress.com). Requisites: Advanced computer programming skills with emphasis in high-performance numerical computing and parallel implementation of software in CPUs and graphic processing units. Experience with molecular simulations is a must. |
Title: Mesoscopic architecture of lignocellulosic fibers Summary: The utilization of lignocellulosic biomass is one of the most promising technologies by means of which plant cell wall polysaccharides and polyphenols are transformed into renewable biofuels and aromatic compounds. Plant biomass resistance to thermo-chemical, thermo-mechanical and enzymatic deconstruction processes is largely due not only to the chemical or mechanical nature of its constituents but also to the complex nanoarchitecture of the amorphous plant cell wall matrix comprised of cellulose, hemicellulose and lignin. Modeling this kind of biosystem using multi-scale and multi-physics approaches is an interesting and relevant challenge. In this project, the successful candidate will apply structural topology optimization techniques [2] based on finite element methods (FEM) and atomistic or particulate descriptions (Molecular Dynamics – MD or Discrete Element Method – DEM) of the different polymeric constituents to study thermal and mechanical properties of lignocellulosic fibers. The main challenge is to propose new evolutionary strategies to identify structural models and to estimate the topology of lignocellulosic materials based on predefined thermal and mechanical properties. References: [1] Pecha M.B., Garcia-Perez M., Foust T.D. and Ciesielski P.N., Estimation of Heat Transfer Coefficients for Biomass Particles by Direct Numerical Simulation Using Microstructured Particle Models in the Laminar Regime. ACS Sustainable Chemistry & Engineering 2017, 5 (1), 1046-1053. [2] Ciesielski P.N., Crowley M.F., Nimlos N.R., Sanders A.W., Wiggins G.M., Robichaud D., Donohoe B.S. and Foust T.D., Biomass Particle Models with Realistic Morphology and Resolved Microstructure for Simulations of Intraparticle Transport Phenomena. Energy & Fuels 2015, 29, 242-254.
[3] Vicente W.M., Zuo Z.H., Pavanello R., Calixto T.K.L., Picelli R., Xie Y.M., Concurrent topology optimization for minimizing frequency responses of two-level hierarchical structures, Computer Methods in Applied Mechanics and Engineering, Volume 301, pp. 116-136, 2016. Researchers involved: Munir Skaf (http://cces.unicamp.br/2018/10/13/munir-skaf/), Euclides de Mesquita Neto, and Renato Pavanello (http://cces.unicamp.br/2018/10/15/renato-pavanello/) Requisites: Applicants must have a PhD degree in one of the following areas: Engineering, Physics, Chemistry or related fields. The candidate must show proof of advanced computer programming skills and experience in computational modeling, e.g. finite element methods and the COMSOL or ANSYS Multiphysics packages. Experience with molecular dynamics simulations is a plus. |
Title: In silico analysis of inhibitors of the Alternative Oxidase (AOX) enzyme: a new fungicide candidate against Moniliophthora perniciosa and other phytopathogens of Brazilian agriculture Summary: Cacao witches’ broom, caused by the basidiomycete fungus Moniliophthora perniciosa, has devastated Brazilian cacao plantations since the 1990s, with enormous economic and social impacts. Resistant to conventional fungicides, the fungus uses the mitochondrial enzyme Alternative Oxidase (AOX) as an escape mechanism from fungicides. In this context, several molecules have been synthesized and tested against AOX, using in vitro and in vivo approaches, resulting in very promising candidates for future fungicides. In silico analyses using virtual screening, molecular docking and molecular dynamics approaches are essential steps to guide the next rounds of molecular synthesis and experimental validation. Researchers involved: Dr. Marcelo Falsarella Carazzolle (mcarazzo@lge.ibi.unicamp.br) and Prof. Dr. Munir Skaf (skaf@unicamp.br) Requisites: Experience in virtual screening, molecular docking and molecular dynamics. Knowledge of phytopathology and plant-host interaction, biochemistry, enzymology and characterization of protein-ligand interactions. |
Title: Multiscale Modeling of Dynamic Failure in 2D Polycrystalline Materials using Boundary Element Methods (BEM) and Atomic-scale Finite Element Methods (AFEM) Summary: This research models the mechanical and fracture behavior of polycrystalline materials at the meso- and nanoscales. At the mesoscale, the behavior of anisotropic 2D crystal aggregates is described by the Boundary Element Method (BEM) using a multi-domain approach [1]. The nanoscale behavior of the intercrystalline space is modelled by an Atomic-scale Finite Element Method [2] using the Finnis-Sinclair (FS) potential [3]. The bridge from the nano- to the mesoscale is obtained by a FEM multi-scale methodology [4], which will lead to a BEM-AFEM coupling procedure [4,5]. References: [1] Alvarez J.E., Galvis, A.F., and Sollero, P.; Multiscale dynamic transition of 2D metallic materials using the boundary element method. Computational Materials Science. v. 155, p. 383-392, 2018. https://doi.org/10.1016/j.commatsci.2018.09.002 [2] Damasceno, D.A., Mesquita, E., Rajapakse, R.N.K.D., Pavanello, R.; Atomic-scale finite element modelling of mechanical behaviour of graphene nanoribbons. International Journal of Mechanics and Materials in Design. p. 1-13, 2018. https://doi.org/10.1007/s10999-018-9403-z [3] Dai, D.X., Kong, Y., Li, J.H., and Liu, B.X.; Extended Finnis-Sinclair potential for bcc and fcc metals and alloys. Journal of Physics: Condensed Matter. v. 18(19), p. 4527-4542, 2006. https://doi.org/10.1088/0953-8984/18/19/008 [4] Efendiev Y., and Thomas, Y. H.; Multiscale Finite Element Methods: Theory and Applications. Springer. 2008. [5] Mesquita, E., Pontes Jr., B.R., and Sousa, E.A.C.; Coupling of finite element and boundary element procedures for steady state elastodynamics. Part I: Formulation. Revista Brasileira de Ciências Mecânicas, RBCM, v. XVI, nº 2, p. 143-158, 1994. Researchers involved: Prof. Paulo Sollero, Prof. Euclides Mesquita, Prof. Renato Pavanello, Prof. Munir Skaf http://cces.unicamp.br/faculty/ Requisites: The ideal candidate for this project should have a PhD in Mechanical Engineering, Physics, Computer Science or related areas, and should have experience in numerical methods to model continuum and molecular scale problems, as well as experience with scientific programming languages. |
> Multiscale modeling of carbon nanomaterials and metal organic frameworks – MOFs
Title: Structural, Electronic and Mechanical Properties of Superwood Summary: Wood is a very important material: it combines low cost and abundance, and it has been used in many structural applications for millennia. However, some of its properties, such as strength and toughness, are unsatisfactory for some advanced engineering applications. Recently [1], a new process was demonstrated to transform bulk natural wood into a high-performance structural material. This process involves partial removal of lignin and cellulose, followed by hot pressing. It results in a material (superwood) with enhanced mechanical and other properties. However, the detailed mechanisms behind these enhancements are still not completely understood at the atomic scale [1,2]. Addressing this issue is one of the objectives of the present project. References: 1. J. Song et al., Nature 554, 224 (2018) 2. T. Li et al., Sci. Adv. 4, eaar3724 (2018) Researchers involved: Prof. Munir S. Skaf (http://cces.unicamp.br/2018/10/13/munir-skaf/), Prof. Douglas S. Galvao (https://sites.ifi.unicamp.br/galvao/en/) Requisites: Experience with molecular dynamics simulations, ab initio methods, finite elements |
Title: Multi-scale Modeling of Hydrogen Storage in Metal-Organic Frameworks (MOFs) Summary: Nanomaterials represent a new and unique opportunity in materials science today. One field where nanomaterials could be very important in basic science and technological applications is hydrogen storage. Among the several promising nanostructures for hydrogen storage purposes, metal-organic frameworks (MOFs) stand out [1]. One important characteristic of these structures is that they are in general very porous, making them ideal candidates to store atoms and/or molecules. Due to the high reactivity associated with the organic groups, an almost infinite number of these structures can be synthesised, which makes computational screening extremely valuable. Recent advances in modelling these systems [2] show that computer modelling can be very effective in designing and selecting candidate structures for hydrogen storage. In recent years a significant number of new MOF structures have been proposed and/or synthesised [1], creating renewed interest in these structures. Objectives: To investigate the possibility of hydrogen storage in MOF structures; To investigate the performance of different structures; To use machine learning to screen good candidate structures; To combine different techniques to address aspects at different scales (ab initio for small structures, Monte Carlo and molecular dynamics for large structures, and macroscale models with finite element methods). References: 1. J. W. Colson, Science 332, 228 (2011). 2. J. R. Holst, A. Trewin and A. I. Cooper, Nature Chem. 2, 915 (2010). Researchers involved: Prof. Munir S. Skaf (http://cces.unicamp.br/2018/10/13/munir-skaf/), Prof. Douglas S. Galvao (https://sites.ifi.unicamp.br/galvao/en/) Requisites: Experience with Monte Carlo, molecular dynamics, ab initio methods, finite elements, machine learning |
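The screening step mentioned above boils down to evaluating a cheap surrogate for uptake over many candidate structures and keeping the best for expensive ab initio or Monte Carlo follow-up. The sketch below ranks candidates with a single-site Langmuir adsorption model; the MOF names and the (q_max, k) parameters are invented placeholders, not measured data, and a real surrogate would be a model trained on simulated isotherms.

```python
def langmuir_uptake(q_max, k, pressure):
    """Single-site Langmuir isotherm: q = q_max * k*P / (1 + k*P)."""
    return q_max * k * pressure / (1.0 + k * pressure)

# Hypothetical candidates: name -> (q_max [wt%], k [1/bar]), invented values.
candidates = {
    "MOF-A": (12.0, 0.05),
    "MOF-B": (8.0, 0.30),
    "MOF-C": (15.0, 0.02),
}

P = 35.0  # bar, an illustrative storage pressure
ranking = sorted(candidates,
                 key=lambda m: langmuir_uptake(*candidates[m], P),
                 reverse=True)
print(ranking)
```

Note how the ranking at 35 bar is not the ranking of q_max alone: a structure with weaker binding but higher capacity only wins at high pressure, which is exactly the trade-off a multi-scale screening campaign has to resolve.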
Title: Multi-scale Modeling of Carbon Nanotube-based Artificial Muscles Summary: Artificial muscles [1] are of practical interest, with many potential applications in a large number of areas, from medicine to the military. However, up to now few types of them have been commercially exploited. Recently, significant new advances in this field have been achieved using carbon nanotube (CNT) yarns [2]. It was possible to create artificial muscles that overcome many of the problems mentioned above. These structures provide fast, high-force, large-stroke torsional and tensile actuation. These artificial muscles can be electrically, chemically or even simply photonically actuated, which can be realised by simply embedding paraffin-like materials into the yarns. These muscles have very complex structures, and there are still many questions that need to be better understood in order to allow the creation of new and more efficient artificial muscles. In this project we intend to investigate the topological and mechanical properties of simplified models of these artificial muscles. Objectives: To create structural models representative of the CNT yarns; To investigate the physical origin of the coiling process; To investigate the role of the embedded paraffin in the actuation process. References: 1. J. Foroughi et al., Science 334, 494 (2011). 2. M. D. Lima et al., Science 338, 928 (2012). Researchers involved: Prof. Douglas S. Galvao (https://sites.ifi.unicamp.br/galvao/en/), Prof. Paulo Sollero (http://cces.unicamp.br/2018/12/17/paulo-sollero/) Requisites: Experience with Molecular Dynamics, ab initio methods, finite elements and machine learning. |
Title: Mesoscopic architecture of lignocellulosic fibers Summary: The utilization of lignocellulosic biomass is one of the most promising technologies by means of which plant cell wall polysaccharides and polyphenols are transformed into renewable biofuels and aromatic compounds. Plant biomass resistance to thermo-chemical, thermo-mechanical and enzymatic deconstruction processes is largely due not only to the chemical or mechanical nature of its constituents but also due to the complex nanoarchitecture of the amorphous plant cell wall matrix comprised of cellulose, hemicellulose and lignin. Modeling this kind of biosystem using multi-scale and multi-physics approaches is an interesting and relevant challenge.In this project, the successful candidate will apply structural topology optimization techniques [2] based on finite element methods (FEM) and atomistic or particulate descriptions (Molecular Dynamics – MD or Discrete Element Method – DEM) of the different polymeric constituents to study thermal and mechanical properties of lignocellulosic fibers. The main challenge is to propose new evolutionary strategies to identify structural models and to estimate the topology of lignocellulosic materials based on predefined thermal and mechanical properties. References: [1] Pecha M.B., Garcia-Perez M., Foust T.D. and Ciesielski P.N., Estimation of Heat Transfer Coefficients for Biomass Particles by Direct Numerical Simulation Using Microstructured Particle Models in the Laminar Regime. ACS Sustainable Chemistry & Engineering 2017 5 (1), 1046-1053. [2] Ciesielski P.N., Crowley M.F., Nimlos N.R., Sanders A.W., Wiggins G.M., Robichaud D., Donohoe B.S. and Foust T.D., Biomass Particle Models with Realistic Morphology and Resolved Microstructure for Simulations of Intraparticle Transport Phenomena. Energy & Fuels 2015, 29, 242-254. 
[3] Vicente W.M., Zuo Z.H., Pavanello R., Calixto T.K.L., Picelli R., Xie Y.M., Concurrent topology optimization for minimizing frequency responses of two-level hierarchical structures, Computer Methods in Applied Mechanics and Engineering, Volume 301, pp. 116-136, 2016. Researchers involved: Munir Skaf (http://cces.unicamp.br/2018/10/13/munir-skaf/), Euclides de Mesquita Neto, and Renato Pavanello (http://cces.unicamp.br/2018/10/15/renato-pavanello/) Requisites: Applicants must have a PhD degree in one of the following areas: Engineering, Physics, Chemistry or related fields. The candidate must show proof of advanced computer programming skills and experience on computer modeling such as finite element methods and COMSOL or ANSYS Multiphysics package. Experience with molecular dynamics simulations is a plus. |
Title: Structural, Mechanical and Electronic Properties of Carbon-based Porous Structures Summary: With the advent of graphene, there is a renewed interest in other carbon allotrope structures, in particular 3D porous structures. Among these structures, we can mention schwarzites [1] and tubulanes [2]. In this project we intend to study the structural, mechanical and electronic properties of such structures. We also intend to search for similar structures using machine learning tools. References: [1] S. M. Sajadi, P. S. Owuor, S. Schara, C. F. Woellner, V. Rodrigues, R. Vajtai, J. Lou, D. S. Galvao, C. S. Tiwary, P. M. Ajayan, Adv. Mater. 2018, 30, 1704820. [2] R. H. Baughman and D. S. Galvao, Chem. Phys. Lett. 1993, 211, 110. Researchers involved: Prof. Douglas S. Galvao (https://sites.ifi.unicamp.br/galvao/en/), Prof. Guido Araujo (https://guidoaraujo.wordpress.com) Requisites: Experience with molecular dynamics, ab initio methods, finite elements and machine learning tools. |
> Large-scale computing and statistical analysis of unsteady flows involving transition and turbulence
Title: High-fidelity simulations and statistical analysis of wall-bounded turbulent flows Summary: The present research proposal concerns the investigation of wall-bounded turbulent flows using high-fidelity numerical simulations. Capturing the relevant turbulence dynamics in these flows is of paramount importance for the design of several engineering configurations such as wings, wind turbine blades and centrifugal compressors. The main issue in such flows, however, is the presence of multiple spatial and temporal scales, which makes it difficult to correctly describe the flow dynamics near the walls. Basically, the turbulence kinetic energy is carried by eddies of different characteristic length scales in layers near and far from the wall. Therefore, it is necessary to use a method which captures the properties of the flows over a broad range of frequencies and spatial scales. We will employ large eddy simulations (LES) for simulating the flows at moderate Reynolds numbers. The LES technique resolves the most energetic scales, which are associated with the turbulence production mechanisms. In order to understand the flow physics, statistical techniques such as proper orthogonal decomposition and deep learning will be employed. Researchers involved: William Roberto Wolf, Guido Araújo Requisites: Knowledge of LES, DNS and large-scale parallel computing (MPI) |
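Proper orthogonal decomposition, mentioned above as an analysis tool, extracts energy-ranked orthonormal modes from a set of flow snapshots via the eigendecomposition of the snapshot correlation matrix. The sketch below hand-rolls the method of snapshots for exactly two snapshot vectors so it stays dependency-free; real LES post-processing would use thousands of snapshots and a parallel SVD, and the demo vectors are arbitrary.

```python
import math

def pod_modes(snapshots):
    """Proper orthogonal decomposition via the method of snapshots,
    specialized to two snapshot vectors. Returns [(energy, mode), ...]
    sorted by decreasing energy; the modes are orthonormal."""
    x1, x2 = snapshots
    # 2x2 snapshot correlation matrix C = X^T X.
    c11 = sum(a * a for a in x1)
    c22 = sum(b * b for b in x2)
    c12 = sum(a * b for a, b in zip(x1, x2))
    mean = 0.5 * (c11 + c22)
    disc = math.sqrt((0.5 * (c11 - c22)) ** 2 + c12 ** 2)
    out = []
    for lam in (mean + disc, mean - disc):  # eigenvalues, largest first
        if lam <= 1e-12:
            continue  # rank-deficient snapshot set
        # Eigenvector of C for eigenvalue lam.
        if abs(c12) > 1e-12:
            a = (c12, lam - c11)
        else:
            a = (1.0, 0.0) if abs(lam - c11) <= abs(lam - c22) else (0.0, 1.0)
        na = math.hypot(a[0], a[1])
        a = (a[0] / na, a[1] / na)
        # Lift back to an n-dimensional mode with unit norm.
        mode = [(a[0] * u + a[1] * v) / math.sqrt(lam)
                for u, v in zip(x1, x2)]
        out.append((lam, mode))
    return out

modes = pod_modes(([1.0, 2.0, 3.0], [1.0, 1.0, 1.0]))
print([round(lam, 3) for lam, _ in modes])  # modal energies, largest first
```

The same energy ranking is what lets LES data be compressed into a handful of dominant modes before feeding them to the deep-learning models the project proposes.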
> Large-scale computing in science and engineering
Title: Modelling of the Dynamic Response of Large Foundations of Synchrotron Nano Facilities Including Soil-Structure Interaction Effects Summary: Brazil is currently building the Sirius synchrotron light source [1], and the proper behavior of the beamlines depends strongly, among other factors, on the vibration levels of the structural foundation. The actual foundation comprises a concrete mat supported by more than one thousand piles. Modelling the dynamic behavior of the foundation subjected to impinging wave fields and external force excitation is crucial to predict and correct undesired vibrational effects. This project encompasses modelling the interaction of large pile groups supported by a given soil profile, using a coupling of the BEM (Boundary Element Method) and the FEM (Finite Element Method) [2,3]. The description of the large coupled system requires the use of substructures, coupled directly or iteratively on distributed clusters with parallel processing capabilities [4,5]. References: [1] Sirius. “Projeto Sirius: Nova Fonte de Luz Síncrotron Brasileira. CNPEM, Centro Nacional de Pesquisas em Energia e Materiais, MCT”. In: http://lnls.cnpem.br/sirius/, 2010. [2] Taherzadeh, Clouteau and Cottereau, “Simple formulas for the dynamic stiffness of pile groups”. Earthquake Engineering and Structural Dynamics. 38(15), 1665-1685, 2009. [3] Barros, P.L.A., Labaki, J., Mesquita, E.; IBEM-FEM model of a piled plate within a transversely isotropic half-space. Engineering Analysis with Boundary Elements, v. 101, April 2019, pp. 281-296. [4] Labaki, J.; Ferreira, L.O.S.; Mesquita, E. Constant Boundary Elements on Graphics Hardware: A GPU-CPU Complementary Implementation. Journal of the Brazilian Society of Mechanical Sciences and Engineering (Print), v. XXXIII, p. 475-482, 2011. [5] Yviquel, H.; Cruz, L.; Araújo, G. Cluster Programming using the OpenMP Accelerator Model. ACM Transactions on Architecture and Code Optimization, v. 15, p. 1-23, 2018.
Researchers involved: Euclides Mesquita, Guido Araujo, Renato Pavanello, Nimalsiri Rajapakse Requisites: The ideal candidate for this project should have a PhD in Mechanical Engineering, Physics, Computer Science or related areas, and should have experience in numerical methods to model continuum and molecular scale problems, as well as experience with scientific programming languages. |
Title: Modelling of Multi-Layered Graphene Sheets using the Atomic-scale Finite Element Method Summary: The idea of the present project is to extend the formulation and application of the Atomic-scale Finite Element Method (AFEM) [1, 2] to model the mechanical behavior of multi-layered graphene sheets. The AIREBO [3] and ReaxFF [4] interatomic potentials shall be implemented within the AFEM realm. The mechanical behavior of the layered graphene, including the description of fracture strength at the atomic level and buckling behavior, is to be studied. The analysis of large atomic ensembles requires a parallel implementation of the methodology on computer clusters and graphics cards [5]. References: [1] Liu, B.; Huang, Y.; Jiang, H.; Qu, S.; Hwang, K. C. The atomic-scale finite element method. Comp. Meth. Appl. Mech. Engrg, v. 193, n. 17, p. 1849-1864, 2004. [2] Damasceno, D.A., Mesquita, E., Rajapakse, R.N.K.D., Pavanello, R.; Atomic-scale finite element modelling of mechanical behaviour of graphene nanoribbons, International Journal of Mechanics and Materials in Design, pp. 1-13, 2018. [3] Stuart, Steven J.; Tutein, Alan B.; Harrison, Judith A. A reactive potential for hydrocarbons with intermolecular interactions. J. Chem. Phys., v. 112, n. 14, p. 6472-6486, 2000. [4] van Duin, A.C.T., Dasgupta, S., Lorant, F., Goddard, W.A., 2001. ReaxFF: A Reactive Force Field for Hydrocarbons. J. Phys. Chem. A 105, 9396–9409. https://doi.org/10.1021/jp004368u [5] Yviquel, H.; Cruz, L.; Araújo, G. Cluster Programming using the OpenMP Accelerator Model. ACM Transactions on Architecture and Code Optimization, v. 15, p. 1-23, 2018. Researchers involved: Euclides Mesquita, Guido Araújo, Renato Pavanello Requisites: The ideal candidate for this project should have a PhD in Mechanical Engineering, Physics, Computer Science or related areas, and should have experience in numerical methods to model continuum and molecular scale problems, as well as experience with scientific programming languages. |
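The ingredient AFEM needs from an interatomic potential is the energy and its derivatives with respect to atomic positions, from which the element stiffness is assembled [1]. As a stand-in for the many-body AIREBO/ReaxFF potentials cited above, the sketch below uses the much simpler Lennard-Jones pair potential, with illustrative parameters (epsilon = sigma = 1, reduced units), not fitted graphene values.

```python
import math

EPS, SIG = 1.0, 1.0  # illustrative reduced units, not graphene parameters

def lj_energy(r):
    """Pair energy U(r) = 4*eps*((sig/r)^12 - (sig/r)^6)."""
    s6 = (SIG / r) ** 6
    return 4.0 * EPS * (s6 * s6 - s6)

def lj_force(r):
    """Scalar force F(r) = -dU/dr = 24*eps*(2*(sig/r)^12 - (sig/r)^6)/r."""
    s6 = (SIG / r) ** 6
    return 24.0 * EPS * (2.0 * s6 * s6 - s6) / r

# The force vanishes (up to roundoff) at the equilibrium separation
# r0 = 2^(1/6)*sigma, where the energy reaches its minimum of -eps.
r0 = 2.0 ** (1.0 / 6.0) * SIG
print(lj_energy(r0), lj_force(r0))
```

Swapping in AIREBO or ReaxFF changes only these energy/derivative routines (and makes them many-body); the surrounding AFEM assembly and the cluster/GPU parallelization [5] are unaffected, which is what makes the extension proposed here tractable.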
The selected candidate is expected to work in an interdisciplinary environment in which the problems and techniques of the research areas may be interconnected. The successful candidate must have a strong background in a computational science and engineering research field, such as computer science, molecular modeling, computational materials science, theoretical chemistry, bioinformatics, statistics, parallel programming, modeling of complex engineering problems, hardware-based acceleration, or machine learning.