High Performance Cloud Computing Centre
Overview
Cloud computing is the on-demand delivery of computing resources (infrastructure, platform and software) as a service over the internet using virtualisation technology. A key advantage of cloud computing is that resources can be allocated and deallocated automatically according to current demand: resources are taken from idle users and software and given to active ones, thus optimising resource utilisation. High Performance Computing (HPC) is the use of parallel processing and distributed computing to achieve greater computing power, so that advanced and complex programmes or simulations run efficiently, reliably and quickly. High Performance Cloud Computing, then, is the running of such high-demand computation on the cloud to gain the advantages of cloud computing.
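A standard way to quantify the benefit of the parallel processing mentioned above (a textbook result, not taken from the Centre's materials) is Amdahl's law: if a fraction $p$ of a program can be parallelised across $n$ processors, the achievable speedup is bounded by

\[ S(n) = \frac{1}{(1 - p) + \frac{p}{n}} \]

For example, with $p = 0.9$ and $n = 16$ cores, the best possible speedup is $1 / (0.1 + 0.9/16) = 6.4\times$.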
Vision
A globally prominent centre in research and innovation of high performance cloud technologies for scientific and engineering computation.
Mission
- To solve complex optimisation problems in scientific computing and engineering simulation
- To undertake research in maximising computational performance while minimising cost
- To proactively undertake research in policy and compliance
Application
Why should we change from a normal computer to HPC?
- COST SAVING
- Low entry cost, as cloud computing enables on-demand, scalable resources. There is no longer the heavy expense of purchasing and maintaining infrastructure.
- LESS COMPLEX
- HPC is specifically designed for complex simulations and computational problems that are far too demanding for a normal computer.
- FASTER
- By using parallel and distributed computing, computation time is reduced by a huge margin, so research, production and marketing of the final product are faster and return on investment (ROI) arrives earlier (see the speedup sketch after this list).
- DATA PROCESSING MADE EASIER
- Big data sets require many small pieces of data to be processed simultaneously. This is where HPC shines!
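As referenced in the FASTER point above, the following is a minimal sketch, assuming nothing beyond the Python standard library, of how splitting a CPU-bound job across cores cuts wall-clock time. The prime-counting workload and the chunk size are illustrative assumptions only, not the Centre's code.

```python
# Contrast serial vs parallel execution of a CPU-bound job using the
# standard-library multiprocessing module. Workload is illustrative.
import time
from multiprocessing import Pool

def count_primes(bounds):
    """Count primes in [lo, hi) by trial division (deliberately CPU-heavy)."""
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    limit, step = 200_000, 25_000
    chunks = [(i, i + step) for i in range(0, limit, step)]

    start = time.perf_counter()
    serial = sum(count_primes(c) for c in chunks)    # one core
    t_serial = time.perf_counter() - start

    start = time.perf_counter()
    with Pool() as pool:                             # one worker per CPU core
        parallel = sum(pool.map(count_primes, chunks))
    t_parallel = time.perf_counter() - start

    assert serial == parallel
    print(f"serial {t_serial:.2f}s, parallel {t_parallel:.2f}s, "
          f"speedup {t_serial / t_parallel:.1f}x")
```

On a multi-core machine the parallel run should approach the core-count speedup predicted by Amdahl's law above, minus process start-up overhead.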
Services
The services that we provide as a Centre:
- INFRASTRUCTURE-AS-A-SERVICE (IaaS)
- Storage Capacity: 117 TB ≈ 117,000,000,000,000 bytes
- Performance: 450 cores in total and >1 TB of memory
- ENGINEERING SIMULATION
- ANSYS, Wolfram Mathematica, MATLAB, LS-DYNA
- TRAINING
- Linux essentials for cloud
- Building your own cloud using OpenNebula
- Image Processing in R
- Deep Learning and CUDA programming
- Parallel System Development using MPI (see the sketch at the end of this section)
- Big Data Analysis using Hadoop
- Cloud Architecture - reverse engineering
- CONSULTANCY
- Building Cluster
- Software Migration to Cloud
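To give a flavour of the MPI training topic listed above, here is a minimal sketch using the mpi4py package (an assumption; the course materials may use C or another binding): each process sums a strided slice of 1..N and rank 0 combines the partial results.

```python
# Minimal message-passing example: run with e.g. `mpirun -n 4 python sum_mpi.py`.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's id, 0..size-1
size = comm.Get_size()   # total number of processes

N = 1_000_000
# Each rank sums a strided slice of 1..N so the work is split evenly.
partial = sum(range(rank + 1, N + 1, size))

# Combine all partial sums onto rank 0.
total = comm.reduce(partial, op=MPI.SUM, root=0)
if rank == 0:
    print(f"sum of 1..{N} = {total}")  # expect N*(N+1)//2 = 500000500000
```

The same pattern (scatter the work, compute locally, reduce the results) underlies most distributed HPC applications.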
Research Projects
- Chaos-based Simultaneous Compression and Encryption for Hadoop
- Cloud Computing Platform
- Migration of Metocean System in MATLAB to Cloud Environment
- Big Data Transfer
- Real-time and Green Scheduling for Cloud Environments
- Development Framework for Seismic Imaging Application
- Trust Criteria Analysis for IaaS
- Forecasting Algorithm Using Deep Learning
Members
- Assoc. Prof Dr Ahmad Kamil B Mahmood
- Assoc. Prof Dr Low Tan Jung
- Assoc. Prof Dr Vishweshwar Prabhappa Kallimani
- Assoc. Prof Dr M.Soperi b M. Zahid
- Dr Lukman B A Rahim
- Dr Mohamed Nordin B Zakaria
- Dr Yong Suet Peng @ Vivian
- Dr Hitham Seddig Alhassan Alhussian
- Dr Tuan Mohd Yussof
- Dr Izzatdin Abdul Aziz
- Dr Noreen Izza Arshad
- Dr Norshakirah bt A. Aziz