Dihuni ships GPU Servers for Generative AI and LLM Applications

Dihuni, a leading artificial intelligence (AI), data center and Internet of Things (IoT) solutions company, today announced it has started shipping new OptiReady GPU servers and workstations designed for Generative AI and LLM applications. These pre-configured systems simplify Generative AI infrastructure selection and accelerate deployment – from procurement to running applications.

Dihuni has paired the new suite of GPU servers with an online configurator that lets customers easily select GPU, CPU and other configuration options. The servers can be preloaded with an operating system and AI frameworks including PyTorch, TensorFlow and Keras. Servers can be purchased stand-alone; for larger deployments such as LLM and Generative AI, Dihuni offers fully racked and cabled pods of high-performance GPU clusters.

“New Generative AI applications require extreme performance GPU systems. We’re using our years of expertise, technologies, partnerships and supply-chain to help Generative AI software companies accelerate their application development,” said Pranay Prakash, chief executive officer at Dihuni. “We have been helping customers in multiple verticals with their GPU server requirements and offer choice and flexibility from a system architecture and software standpoint to ensure we are delivering systems optimized for Generative AI applications.”

The complete line of new Generative AI accelerated GPU servers gives students, researchers, scientists, architects and designers the flexibility to select systems that are sized correctly and optimized for their AI and HPC applications. More information on the new servers and their GPU options is available on Dihuni's website.
