Course Outline

Introduction to Ollama

  • What is Ollama and how does it work?
  • Benefits of running AI models locally
  • Overview of supported LLMs (Llama, DeepSeek, Mistral, etc.)

Installing and Setting Up Ollama

  • System requirements and hardware considerations
  • Installing Ollama on different operating systems
  • Configuring dependencies and environment setup (a quick verification sketch follows this list)
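
As a first hands-on step, here is a minimal Python sketch (assuming Ollama has been installed and its server is running on the default local endpoint, http://localhost:11434) that verifies the installation responds before any models are pulled:

    # Minimal verification sketch: query the local Ollama server's version
    # endpoint. Assumes the default port (11434); adjust if you changed it.
    import json
    import urllib.request

    OLLAMA_URL = "http://localhost:11434"

    def check_ollama() -> None:
        # GET /api/version returns a small JSON object with the installed version.
        with urllib.request.urlopen(f"{OLLAMA_URL}/api/version", timeout=5) as resp:
            info = json.loads(resp.read())
        print("Ollama is running, version:", info.get("version"))

    if __name__ == "__main__":
        check_ollama()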

Running AI Models Locally

  • Downloading and loading AI models in Ollama
  • Interacting with models via the command line (a scripted example follows this list)
  • Basic prompt engineering for local AI tasks
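
The same interactions covered at the command line can also be scripted against Ollama's local REST API. The sketch below (assuming a model such as "llama3.2" has already been pulled with the command ollama pull llama3.2) sends a single non-streaming prompt and prints the completion:

    # Minimal sketch of a non-streaming completion request against the local
    # Ollama REST API. The model name is an example; use any pulled model.
    import json
    import urllib.request

    def generate(model: str, prompt: str) -> str:
        payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
        req = urllib.request.Request(
            "http://localhost:11434/api/generate",
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req, timeout=120) as resp:
            return json.loads(resp.read())["response"]

    if __name__ == "__main__":
        print(generate("llama3.2", "Explain in one sentence what a local LLM is."))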

Optimizing Performance and Resource Usage

  • Managing hardware resources for efficient AI execution
  • Reducing latency and improving model response time
  • Benchmarking performance for different models (a benchmarking sketch follows this list)
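
As a sketch of the kind of measurement covered in this module (assuming the listed models have been pulled locally and the default endpoint is used), tokens per second can be derived from the timing fields Ollama returns with each response:

    # Rough benchmarking sketch: run one prompt per model and compute
    # tokens/second from eval_count (generated tokens) and eval_duration
    # (generation time in nanoseconds). Model names are examples only.
    import json
    import urllib.request

    def benchmark(model: str, prompt: str) -> None:
        payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
        req = urllib.request.Request(
            "http://localhost:11434/api/generate",
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req, timeout=300) as resp:
            data = json.loads(resp.read())
        tokens = data["eval_count"]
        seconds = data["eval_duration"] / 1e9  # nanoseconds -> seconds
        print(f"{model}: {tokens} tokens in {seconds:.1f}s ({tokens / seconds:.1f} tokens/s)")

    if __name__ == "__main__":
        for model in ("llama3.2", "mistral"):
            benchmark(model, "Summarize the benefits of running LLMs locally.")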

Use Cases for Local AI Deployment

  • AI-powered chatbots and virtual assistants (a minimal chat-loop sketch follows this list)
  • Data processing and automation tasks
  • Privacy-focused AI applications
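
To give a flavour of the chatbot use case, here is a minimal command-line chat loop against Ollama's local /api/chat endpoint (the model name is an example; any locally pulled model works). The conversation history stays on the machine, which is the basis of the privacy-focused applications discussed here:

    # Minimal local chatbot sketch: keeps the message history in memory and
    # sends it to the local Ollama server on every turn. Type "exit" to quit.
    import json
    import urllib.request

    MODEL = "llama3.2"  # example; substitute any locally pulled model
    URL = "http://localhost:11434/api/chat"

    def chat() -> None:
        messages = []
        while True:
            user = input("you> ").strip()
            if user in ("exit", "quit"):
                break
            messages.append({"role": "user", "content": user})
            payload = json.dumps({"model": MODEL, "messages": messages, "stream": False}).encode()
            req = urllib.request.Request(URL, data=payload,
                                         headers={"Content-Type": "application/json"})
            with urllib.request.urlopen(req, timeout=300) as resp:
                reply = json.loads(resp.read())["message"]
            messages.append(reply)
            print("assistant>", reply["content"])

    if __name__ == "__main__":
        chat()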

Summary and Next Steps

Requirements

  • Basic understanding of AI and machine learning concepts
  • Familiarity with command-line interfaces

Audience

  • Developers running AI models without cloud dependencies
  • Business professionals interested in AI privacy and cost-effective deployment
  • AI enthusiasts exploring local model deployment

Duration

  • 7 Hours

Custom Corporate Training

Training solutions designed exclusively for businesses.

  • Customized Content: We adapt the syllabus and practical exercises to the real goals and needs of your project.
  • Flexible Schedule: Dates and times adapted to your team's agenda.
  • Format: Online (live), In-company (at your offices), or Hybrid.

Investment

Price per private group for online live training, starting from 1600 € + VAT*

Contact us for an exact quote and to learn about our latest promotions.
