Trying Out Ollama for Windows (Preview)

What Is Ollama?

Ollama is a free tool that allows you to run open-source large language models (LLMs) locally on your machine.

For example, you can install Ollama and run Meta's Llama 2 model.

You can then interact with the LLM, knowing that your data stays on your machine and is not sent to the cloud.

 

Other reasons you might choose to run an LLM locally include:

  • More control over the hardware, trained model, data, and software you use to run the service.
  • Lower costs if you already have the necessary hardware.
  • Reduced latency.
  • No dependency on a third-party provider that could monitor or withdraw the service.

 

~

Downloading Ollama for Windows Preview

Get Ollama for Windows from the Ollama website: https://ollama.com/download.

After downloading and installing it, you can check that it's running by browsing to http://localhost:11434/, where you should see the message "Ollama is running":
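If you'd rather check from code than a browser, you can hit the same endpoint programmatically. Here's a minimal sketch in Python, assuming Ollama is listening on its default port (11434):

import urllib.request

# Ollama's root endpoint returns a plain-text status message
# ("Ollama is running") when the server is up.
with urllib.request.urlopen("http://localhost:11434/") as response:
    print(response.read().decode("utf-8"))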

~

Pulling a Model

Next, you need to fetch a model. There are many to choose from in the library on the Ollama website.

Open the terminal and type the following command:

ollama run llama2

This will pull Meta's Llama 2 model. The download takes a while (it's 3.8 GB).
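Once the pull completes, you can confirm the model is available locally with the ollama list command, or programmatically via the REST API's /api/tags endpoint. A minimal sketch, again assuming the default port:

import json
import urllib.request

# /api/tags returns the models currently stored on this machine.
with urllib.request.urlopen("http://localhost:11434/api/tags") as response:
    data = json.load(response)

for model in data["models"]:
    print(model["name"])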

~

Testing

After the installation, I carried out a few tests using my personal laptop. It has the following spec:

  • 16 GB RAM
  • Intel Core i7 8th Gen
  • NVIDIA GeForce MX150

 

I’ve had it for a few years.

Hello

A simple “hello” prompt took 4 minutes in total to process.
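You can run the same kind of test programmatically through Ollama's /api/generate endpoint. A minimal sketch, assuming the default port and the llama2 model pulled earlier; with "stream" set to false, the call blocks until the full response is ready, so expect the same long wait on modest hardware:

import json
import urllib.request

payload = json.dumps({
    "model": "llama2",
    "prompt": "hello",
    "stream": False,  # wait for one complete response instead of a token stream
}).encode("utf-8")

request = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    print(json.load(response)["response"])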

Creating Code

Asking it to create a “hello world” HTML page took my laptop 8 minutes to process.

Task Manager

My poor laptop struggles to cope while the model is processing a prompt, as Task Manager shows:

~

Remove a Model

To remove the model, use the following command:

ollama rm llama2
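The REST API exposes the same operation if you prefer to script it. A minimal sketch using the /api/delete endpoint; note that the "name" field is how the preview-era API identified the model, so check the current API docs if this has changed:

import json
import urllib.request

# DELETE /api/delete removes a local model, equivalent to `ollama rm`.
request = urllib.request.Request(
    "http://localhost:11434/api/delete",
    data=json.dumps({"name": "llama2"}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="DELETE",
)

with urllib.request.urlopen(request) as response:
    print("Deleted, HTTP status:", response.status)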

 

That’s it!

~

Learn More

You can learn more about Ollama for Windows at the following:

  • The Ollama website: https://ollama.com
  • The Ollama GitHub repository: https://github.com/ollama/ollama

~
