2. The differences in inference capabilities between Velox and TensorFlow – SPOTLIGHT PAPER
3. MidLLaMAI: Balancing Quantization with Pruning while compressing LLaMA 2 – SPOTLIGHT PAPER
4. Comparison & Study of Distributed Deep Learning Training Techniques
5. Distributed Model Training With Dynamic Gradient Compression – SPOTLIGHT PAPER
7. Benchmarking Data Parallel vs Model Parallel Training with PyTorch and VeloxML
9. Comparative Analysis of Standard Image Classification Model Training Techniques with FSDP
10. Comparative Analysis of Image Classification Performance: PyTorch and TensorFlow
11. Comparison of ML Workloads in Velox, TensorFlow and PyTorch
14. Comparative Study of TensorFlow and PyTorch on Single and Distributed Systems
15. Comparative Analysis of Forward Grad and Backpropagation
16. Benchmarking Image Classification Models Using Parallelisation Techniques
17. A Comparative Analysis of Deep Learning Frameworks for Music Recommendation Systems
18. Accelerating Decision Tree Training on the HIGGS Dataset
20. A Comparison of Different Deep Learning Frameworks in a Standalone and Distributed Sense
21. Implementation of Model Compression Techniques on Deep Neural Networks
22. Data Intensive Systems for Machine Learning
23. Performance Analysis of Distributed Training Frameworks
24. A Comparative Analysis of Distributed Training Strategies
25. Comparison of Cardinality Estimation Techniques Utilizing Machine Learning
27. VeRA: VectorDB-based Retrieval Augmentation – SPOTLIGHT PAPER
28. CSE 598: Data-Intensive System for Machine Learning
29. Privacy-Preserving Log Analysis for Machine Learning Applications
30. Comparative Analysis of Velox, PyTorch, and TensorFlow on ResNet Workload
31. Comparison of Deep Q-Network for Reinforcement Learning using Multi-stage Frameworks
32. Benchmarking Large Language Model Inference – SPOTLIGHT PAPER
35. LLM Performance Optimized by DeepSpeed – SPOTLIGHT PAPER