
UMD Smith launches Large Language Model (LLM) Training Workshop

Enrollment Open for Initiative Tailored for Professionals in Public-Facing and Commercial Sectors

Data scientists and engineers, especially in the government and healthcare sectors, can learn to build their own large language model (LLM), like the one behind ChatGPT, in a forthcoming workshop from the Office of Executive Education at the University of Maryland’s Robert H. Smith School of Business.

Enrollment is open for “Large Language Model Training and Development,” a training topic new to the school’s executive education portfolio and to Greater Washington. It will run in person at UMD’s College Park campus from 9 a.m. to 2:30 p.m. on Fridays, August 2-23, 2024.

“The workshop represents a deep dive into the technical framework behind the large language model and fits with the Smith School’s innovative approach to AI,” says instructor and Associate Professor of Information Systems Kunpeng Zhang.

For prospective participants, he lists these benefits of building LLMs:

  • Industry-specific accuracy: Developing AI models that understand complex terminology and regulations unique to an industry helps ensure precise and reliable results.
  • Data security and compliance: Training models on internal data keeps sensitive information confidential and compliant with regulations.
  • Operational efficiency: Custom AI solutions streamline processes and enhance decision-making to meet the specific needs of a given environment.

“This course is ideal for those familiar with Python and will challenge participants to use their knowledge in real-time coding exercises,” Zhang says. “It is especially useful for data scientists and data engineers in secure data environments whose use of public-facing AI tools is restricted and who are interested in creating their own. These solutions can be implemented in any commercial environment without the need for extensive computing resources.”

In addition to deploying a customized LLM for an organization, participants who complete the workshop will be able to describe the foundations of LLMs and transformers, implement a transformer-based LLM, and apply the pretrain-and-fine-tune paradigm to text understanding.
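To give a sense of the pretrain-and-fine-tune paradigm the workshop covers, here is a minimal Python sketch assuming the Hugging Face transformers and datasets libraries; the model checkpoint (distilbert-base-uncased) and dataset (imdb) are illustrative choices, not part of the course materials.

```python
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
from datasets import load_dataset

# Load a pre-trained transformer and its tokenizer (illustrative checkpoint).
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

# Tokenize a small text-classification dataset for the downstream task.
dataset = load_dataset("imdb")
def tokenize(batch):
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=256)
tokenized = dataset.map(tokenize, batched=True)

# Fine-tune the pre-trained model on a subset of the labeled data.
args = TrainingArguments(output_dir="out", num_train_epochs=1,
                         per_device_train_batch_size=8)
trainer = Trainer(model=model, args=args,
                  train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)))
trainer.train()
```

The key idea is that the expensive pre-training has already been done; fine-tuning adapts the general-purpose model to a specific task with modest data and compute, which is what makes the approach feasible without extensive computing resources.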
