Making deep learning models
Lean. Efficient. Fast. Affordable. Sustainable.

Unleash the true potential of your business through our lean, efficient, and super-fast deep learning models. Whether it's optimizing performance or accelerating inference, we've got you covered.

Our Research

Surgical Feature-Space Decomposition of LLMs: Why, When and How, ACL 2024

~Arnav Chavan, Nahush Lele, Deepak Gupta

Faster and Lighter LLMs: A Survey on Current Challenges and Way Forward, IJCAI 2024

~Arnav Chavan, Raghav Magazine, Shubham Kushwaha, Mérouane Debbah, Deepak Gupta

Growing Vision Transformers with heterogeneous scaling for improved efficiency and performance, ICLR (Tiny Track) 2024

~Akash Guna RT, Arnav Chavan, Deepak Gupta

Rethinking Compression with Reduced Order Modelling for lighter LLMs, ICLR (Tiny Track) 2024

~Arnav Chavan, Nahush Lele, Deepak Gupta

A Comparative Study on the Impact of Model Compression Techniques on Fairness in Language Models, ACL 2023

~Krithika Ramesh, Arnav Chavan, Shrey Pandit, Sunayana Sitaram

One-for-All: Generalized LoRA for Parameter-Efficient Fine-tuning

~Arnav Chavan, Zhuang Liu, Deepak Gupta, Eric Xing, Zhiqiang Shen

Training CNNs on large images on a single GPU, once thought impossible, with PatchGD, WANT@NeurIPS 2023

~Deepak K Gupta, Gowreesh Mago, Arnav Chavan, Dilip K Prasad

Designing lightweight CNN- and transformer-based object trackers through network pruning, ICASSP 2023

~Saksham Aggarwal, Taneesh Gupta, Pawan Kumar Sahu, Arnav Chavan, Rishabh Tiwari, Dilip K Prasad, Deepak K Gupta

Benchmarking model training and inference for resource-constrained deep learning, RCV@ICCV 2023

Budget-aware binarization of CNNs using MixBin, WACV 2023

~Udbhav Bamba, Neeraj Anand, Dilip K Prasad, Deepak K Gupta

Improving the generalization ability & compute efficiency of meta-learning with MetaDOCK, CVPR 2022

~Arnav Chavan, Rishabh Tiwari, Udbhav Bamba, Deepak K Gupta

Slimming vision transformers without performance degradation using ViT-Slimming, CVPR 2022

~Arnav Chavan, Zhiqiang Shen, Zhuang Liu, Zechun Liu, Kwang-Ting Cheng, Eric P Xing

ChipNet: a state-of-the-art (SOTA) model pruning method, ICLR 2021

~Rishabh Tiwari, Udbhav Bamba, Arnav Chavan, Deepak K Gupta

Rescaling CNNs through Learnable Repetitions of Network Parameters, ICIP 2021

~Arnav Chavan, Udbhav Bamba, Rishabh Tiwari, Deepak Gupta




Benchmarks


Save up to 80% on
the GPU computing cost

As DL models become increasingly demanding, we specialize in optimizing them for efficiency, delivering significant cost savings while maintaining peak performance.



Enable powerful DL
models on small devices 

We've mastered the art of making Generative AI models lean and efficient, enabling deployment on mobile devices and bringing the magic of AI to a broader audience.



Get up to 6x faster
inference on the same devices

Our deep learning models are engineered for unrivaled inference speeds, enhancing user experiences and enabling seamless follow-up tasks, particularly in critical applications like autonomous driving and object detection.

Secure your data by
switching to edge computing 

Data privacy and security are non-negotiable. Nyun AI ensures the utmost protection by deploying deep learning models on edge devices, eliminating the need to stream data to the cloud and keeping your networks private and secure.

Nyun Zero is a no-code, highly user-friendly, 360-degree AI efficiency suite.

Nyun Zero provides a seamless and hassle-free entry point to the world of Artificial Intelligence, whether you're a seasoned pro or just starting out. It supports multiple deep learning frameworks and flexible deployment across a range of devices, so you can swiftly implement AI solutions within your organization and empower your team to leverage the full potential of AI in solving complex problems and driving innovation.

Unlock the full potential of Generative models

  • Tailored & Efficient LLM Fine-Tuning at Affordable Rates
  • Easy Data Upload While Preserving Privacy
  • Efficient Deployment with Nyun Zero

Make your models ultra efficient

  • Apply multiple model compression techniques to make your models fast and efficient
  • Get compute-efficient & fast DL models
  • Easily deployable through the Nyun Zero platform

All relevant KPIs at your fingertips

  • The tracker shows real-time metrics relevant to your model
  • Super easy to deploy




Safeguarding your data privacy

At Nyun AI, we understand the significance of your data and prioritize its privacy and security, which are paramount for your organization. With this in mind, we take rigorous measures to ensure that your data is masked at every point, safeguarding it from potential vulnerabilities. We also offer the flexibility of deploying our tools on your own servers, giving you complete control and ownership of your data. Rest assured, your data remains solely with you, reinforcing our commitment to its confidentiality and protection.


Ready to embark on your AI journey?

Discover the possibilities of streamlined, efficient, and lightning-fast deep learning models with Nyun AI. Together, let's shape a sustainable and powerful AI future. Reach out to our team today to explore how our solutions can transform your business.

Supported By

Texmin
Tenity
Microsoft for Startups
Google Cloud for Startups
AWS Activate
Nvidia Inception