The next ACAT will be held in Saas-Fee, one of the finest ski resorts in Switzerland.
From March 11 to 15, 2019, you are invited to attend and contribute to the morning and evening sessions of the workshop. The afternoons will be free for informal discussions or for ski and mountain lovers.
ACAT 2019 is foreseen to be a landmark in the series, as we stand at a dramatic moment in the history of computing and physics research: advances in AI, deep learning (DL), quantum computing (QC), and high performance computing (HPC), as well as dedicated chips (GPUs, TPUs, neuromorphic processors), separately and in combination, will drastically change the way physics research is done.
The ACAT 2019 motto underlines the trend we all see today:
Empowering the Revolution:
Bringing Machine Learning to High Performance Computing.
Data analysis, event reconstruction, accelerator simulations, and theoretical studies are starting to incorporate the most recent advances in deep learning. The efficiency and accuracy of this approach are directly related to the quality and amount of training the system receives, which can be provided by intertwining DL with HPC: not only the large supercomputers competing for exascale supremacy, but also the added power of accelerator chips such as FPGAs, GPUs, and TPUs (Tensor Processing Units), or even, in a more distant future, quantum accelerators. Although high energy physics has stood apart from supercomputers in the past, since most simulation and event reconstruction could be performed efficiently on computer farms or the Grid, it is important today to revisit this position in light of the developments in AI and DL.
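The intertwining of DL training with HPC mentioned above can be sketched, in its simplest form, as synchronous data parallelism: each worker computes a gradient on its own shard of the data, and the gradients are averaged before every update. The sketch below is a toy NumPy illustration under that assumption (a single process simulating four workers; real systems would use MPI or NCCL collectives for the averaging step):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression data: y = 3x + 1 with noise, sharded over 4 "workers"
# to mimic synchronous data-parallel training on an HPC system.
x = rng.standard_normal(400)
y = 3.0 * x + 1.0 + 0.1 * rng.standard_normal(400)
shards = list(zip(np.array_split(x, 4), np.array_split(y, 4)))

w, b = 0.0, 0.0
lr = 0.1
for step in range(200):
    grads = []
    for xs, ys in shards:
        err = w * xs + b - ys                            # local forward pass
        grads.append((np.mean(err * xs), np.mean(err)))  # local MSE gradient
    gw, gb = np.mean(grads, axis=0)  # "all-reduce": average worker gradients
    w -= lr * gw
    b -= lr * gb

print(w, b)  # close to the true parameters (3, 1)
```

Because the averaged gradient equals the full-batch gradient, the parallel run converges to the same solution as a single worker seeing all the data, only faster in wall-clock time when the shards are processed concurrently.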
But the ACAT 2019 program will not be limited to this aspect. Many other DL issues are on the table, such as how to extract scientific meaning from a DL analysis. How do we learn physics from machine learning? Or, put differently, how do we extract new scientific information from the internal weights of the neural-network nodes, the so-called black-box problem?
Other issues include the estimation of systematic errors or biases introduced by the training stage, the risk of overfitting, and hyperparameter optimization.
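The interplay between overfitting and hyperparameter optimization can be illustrated with a minimal sketch: fit models of increasing capacity on a training split and pick the capacity that minimizes error on a held-out validation split. Here, as a toy stand-in for any model-capacity hyperparameter, the polynomial degree is scanned with plain NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a smooth signal plus noise, split into train/validation sets.
x = np.linspace(-3, 3, 60)
y = np.sin(x) + 0.2 * rng.standard_normal(x.size)
x_train, y_train = x[::2], y[::2]
x_val, y_val = x[1::2], y[1::2]

def val_mse(degree):
    """Fit a polynomial of the given degree on the training split and
    return its mean squared error on the validation split."""
    coeffs = np.polyfit(x_train, y_train, degree)
    return np.mean((np.polyval(coeffs, x_val) - y_val) ** 2)

# Hyperparameter scan: low degrees underfit, high degrees overfit;
# the validation error selects a degree in between.
errors = {d: val_mse(d) for d in range(1, 13)}
best = min(errors, key=errors.get)
print(best, errors[best])
```

The same train/validation logic underlies cross-validation and the more elaborate Bayesian or evolutionary hyperparameter searches used in practice.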
The workshop will also address, in a dedicated session, the recent and rapid developments in quantum computers, thanks to the involvement of many universities and companies such as D-Wave, Google, Microsoft, and IBM. Not only could QC provide the high-speed searchers, classifiers, and minimizers needed for event reconstruction and data analysis, but it opens a new dimension in theoretical physics in the sense first proposed by R. Feynman back in 1981 in his famous presentation "Simulating Physics with Computers". Often called simulators, these quantum systems can be made to simulate other quantum systems that are intractable for classical computers.
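As a toy illustration of the "high-speed searcher" idea, the sketch below classically simulates one iteration of Grover's algorithm on 2 qubits (4 items) with plain NumPy; for this problem size a single oracle-plus-diffusion step finds the marked item with certainty. The marked index is an arbitrary choice for the example:

```python
import numpy as np

N = 4       # 2 qubits -> 4 basis states
marked = 2  # arbitrary index of the item being searched for

# Start in the uniform superposition over all basis states.
state = np.full(N, 1 / np.sqrt(N))

# Oracle: flip the sign of the marked state's amplitude.
oracle = np.eye(N)
oracle[marked, marked] = -1

# Diffusion operator: inversion about the mean, 2|s><s| - I.
s = np.full(N, 1 / np.sqrt(N))
diffusion = 2 * np.outer(s, s) - np.eye(N)

# One Grover iteration, then measurement probabilities.
state = diffusion @ (oracle @ state)
probs = np.abs(state) ** 2
print(probs)  # essentially all probability sits on the marked index
```

For N items the same scheme needs only about sqrt(N) iterations, versus N/2 classical lookups on average, which is the quadratic speed-up behind the searcher/minimizer applications mentioned above.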
Are the quantum computers being developed today, whether based on annealing or on qubit gates, suitable for solving lattice QCD calculations at a level that classical supercomputers cannot reach? Currently limited to low-energy systems, could we imagine some applications for HEP?
Certainly the solution will not come easily in the near future, but we have to address these issues should quantum computers scale to operational size.
Focusing on DL and QC does not mean that the other topics which have been the bread and butter of previous ACAT workshops will be ignored. On the contrary, the boost coming from these more recent topics will benefit all, including the more traditional topics listed here.
+ All the regular ACAT topics
Track 1: Computing Technology for Physics Research
- Languages, Software quality, IDE and User Interfaces
- Languages (new C++ standard, Java, …), language interoperability, code portability
- Software quality assurance; code reflection; documentation, performance and debugging tools
- Computer system benchmarking, beyond LINPACK
- IDE and frameworks
- User Interfaces, Common Libraries.
- Distributed and Parallel Computing
- Multilevel parallelism
- Distributed computing
- GRID and Cloud computing
- New architectures
- Massive Multicore
- High Performance Computing
- Accelerator-based computing (GPGPUs, FPGAs)
- High and low precision floating-point (quad/octuple precision and short float for CUDA)
- Containerization (shifter, remote scripting)
- Hardware abstraction
- New TCP control and routing mechanisms
- Alternatives to Ethernet
- Online computing
- Advanced Monitoring, Diagnostics and Control
- Scalable distributed data collectors
- High Level Triggering (HLT)
- Stream event processing & High Throughput Computing (HTC)
Track 2: Data Analysis – Algorithms and Tools
- Machine Learning
- Neural Networks and Other Pattern Recognition Techniques
- Evolutionary and Genetic Algorithms
- Package Benchmarking
- Automation of Science: Data to formula
- Advanced Data Analysis Environments
- Statistical Methods, Multivariate analysis
- Data mining
- Simulation, Reconstruction and Visualization Techniques
- New algorithms for finding tracks or other objects
- Detector and Accelerator Simulations, MC and fast MC
- Visualization Techniques; event displays
- Advanced Computing
- Quantum Computing
- Bio Computing: life process simulation, brain simulation, quantum biology
Track 3: Computations in Theoretical Physics: Techniques and Methods
- Automatic Systems
- Automatic Computation Systems: from Amplitudes to Event Generators
- Multi-dimensional Integration: Methods and Tools
- Intensive High Precision Numerical Computations: Algorithms and Systems
- Higher Orders
- Matching NLO and NNLO Calculations to Event Generators
- Multi-loop Calculations and Higher Order Corrections
- Computer Algebra Techniques and Applications
- Computational Physics: Theoretical and Simulation Aspects
- Lattice QCD
- Cosmology, Universe Large Scale Structure, Gravitational Waves
- Nuclear Physics N-body Computation
- Plasma Physics
- Earth Physics, Climate, Earthquakes