1. Core Concept: Why Does OpenClaw Need a Large Language Model?
Many people wonder how OpenClaw manages to complete complex operations and automate entire workflows. The answer is simple: it needs a powerful “brain” to direct it, and that brain is a large language model (LLM). OpenClaw itself is more like “hands and feet,” responsible only for executing concrete instructions; deciding what to do at each step, and how, is entirely up to the LLM. Without an LLM behind it, OpenClaw is a “body without a soul” and cannot reach its full potential.
However, most high-quality LLMs on the market are paid services. Not only are usage costs high, but there is also a risk of sensitive data leaking: your data must be uploaded to third-party servers, so privacy is hard to guarantee. A free, secure, easy-to-deploy local LLM tool is therefore the best option for OpenClaw users, and Ollama is exactly such a tool.
2. Why Choose Ollama? Two Core Advantages That Solve Key Pain Points
Among the many local LLM deployment tools, Ollama stands out because it addresses two core pain points for novice users: subscription costs and hardware requirements. Its specific advantages are:
- Privacy guaranteed, with data kept local end to end: Ollama runs LLMs entirely on your own machine. No operational data is uploaded to any third-party server, which eliminates the risk of sensitive data leakage at the root and is more secure than paid cloud models.
- Works on low-end PCs, with free cloud models as a backup: If your computer is not powerful enough to run a local LLM smoothly, Ollama also provides completely free cloud-hosted models. Even a low-end PC can call them easily, at no cost, neatly solving the problem of “wanting to use high-quality models without paying.”
Whether you are a novice or a user with strict privacy requirements, Ollama is the best “brain” to pair with OpenClaw. Below, we walk through installing Ollama on Windows and using both local and cloud models.
3. Practical Steps (1): Install Ollama on Windows in a Few Easy Steps
Installing Ollama on Windows is straightforward: no command-line work is required, and novices can get started quickly. Follow the steps below so you don’t miss any details:
- Download the Ollama installer: Open a browser, search for “Ollama,” and open the official Ollama homepage; then click the “Download” button in the upper right corner to reach the download page.
- Choose an installation method: The download page offers three operating systems; choose “Windows.” On Windows there are two installation methods: a PowerShell command or a downloadable installer. Because the PowerShell method can fail if the network is interrupted, novices are advised to click “Download for Windows” and use the installer instead.
- Run the installer: Once the download finishes, double-click the installer file, click “Install,” and wait for installation to complete. This takes only a short time, varying slightly with your computer’s configuration; installation also works normally inside a virtual machine.
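As a quick sanity check after installation, you can confirm that the Ollama background service is answering on its default local port (11434): a running server responds to a plain GET on its root URL. The helper below is our own sketch, not part of Ollama:

```python
import urllib.request
import urllib.error

def ollama_is_running(base_url: str = "http://localhost:11434",
                      timeout: float = 2.0) -> bool:
    """Return True if an Ollama server answers at the given address."""
    try:
        with urllib.request.urlopen(base_url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

# Should print True once the tray icon is visible and the service is up.
print(ollama_is_running())
```

If this prints False even though the tray icon is visible, restarting the Ollama application (as described later for the cloud models) usually fixes it.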
Note: Ollama itself has low hardware requirements, and most Windows computers can install it without issue. Running local LLMs later, however, does place demands on your hardware (especially memory and the graphics card); low-end computers should prefer the cloud models.
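To get a feel for those hardware demands, a common back-of-the-envelope estimate (our own rule of thumb, not an official Ollama figure) is parameters × bits-per-weight ÷ 8 bytes of memory, plus some runtime overhead:

```python
def approx_model_ram_gb(params_billion: float,
                        bits_per_weight: int = 4,
                        overhead: float = 1.2) -> float:
    """Very rough RAM estimate for a quantized local model.

    params_billion  -- model size in billions of parameters (8 for an 8B model)
    bits_per_weight -- quantization level (4-bit is a common default)
    overhead        -- ~20% extra for the KV cache and runtime buffers
    """
    return params_billion * bits_per_weight / 8 * overhead

# An 8B model at 4-bit quantization needs on the order of 5 GB of memory.
print(round(approx_model_ram_gb(8), 1))  # → 4.8
```

This is why an 8B-class model suits a mid-range machine, while larger models quickly exceed what a low-end PC can hold.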
4. Practical Steps (2): Use an Ollama Local Model as OpenClaw’s Local Brain
Once installation completes, Ollama starts automatically and its icon appears in the system tray at the lower right of the screen. We can now configure a local LLM to serve as OpenClaw’s local “brain.” The steps are as follows:
- Open the Ollama interface: Click the Ollama tray icon to open the running interface. A “Select a model” drop-down list shows all available models.
- Select a local model: In the drop-down list, models without the “Cloud” label are local models (they must be downloaded to run locally). Common options include open-source models such as Qwen and DeepSeek; choose according to your computer’s configuration. For example, the “DeepSeek-R1 8B” model suits a mid-range machine.
- Call the local model and test it: After selecting the model, send a test prompt in the input box (such as “Hello, introduce yourself”). On the first call, the system automatically downloads the model; the download time depends on your network speed and the model size, and subsequent calls need no further download.
- Verify that the model runs: After the download completes, the model processes the prompt and returns a result. This confirms the local model is running and can be used directly to drive OpenClaw’s automated operations.
Note: A local model’s speed depends on your computer’s configuration. Slow responses on a virtual machine or low-end computer are normal; if it stalls outright, switch to a cloud model.
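Beyond the graphical interface, the same local model can be called programmatically through Ollama’s HTTP API (the `/api/generate` endpoint on the default port), which is how tools like OpenClaw drive it. The sketch below assumes a running Ollama server at its default address and that the model tag (here `deepseek-r1:8b`) has already been pulled; `generate` and `build_generate_payload` are helper names of our own choosing:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default local endpoint

def build_generate_payload(model: str, prompt: str, stream: bool = False) -> dict:
    """Request body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": stream}

def generate(model: str, prompt: str) -> str:
    """Send a one-shot prompt to the local Ollama server, return the reply text."""
    payload = json.dumps(build_generate_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a running Ollama server with the model pulled):
#   print(generate("deepseek-r1:8b", "Hello, introduce yourself"))
```

With `stream` set to False, the server returns one complete JSON object instead of a stream of partial chunks, which keeps the example simple.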
5. Practical Steps (3): Call an Ollama Cloud Model and Run Smoothly on Low-End PCs
If your computer’s configuration is too low to run a local LLM smoothly, don’t worry: the free cloud models Ollama provides make a perfectly good fallback. The calling steps below are free of charge and simple to follow:
- Select a cloud model: Open the Ollama interface and, in the “Select a model” drop-down list, pick a model carrying the “Cloud” label (such as “gpt-oss:120b-cloud”). These models need no download and run directly in the cloud.
- Register and log in to an Ollama account: Calling cloud models requires an Ollama account. Click the “Sign In” button in the interface to open the official Ollama website; if you don’t have an account, use the registration link at the bottom of the page to sign up with your email, then log in.
- Choose the model’s operating parameters: After logging in, return to the Ollama interface. With a cloud model selected, three parameter options appear (“Low/Medium/High”); novices can pick “Medium” to balance speed and quality.
- Test the cloud model: Send a test prompt in the input box (such as “Hello, what LLM are you using?”). The cloud model processes it quickly and, on a low-end computer, responds much faster than a local model, making it well suited to driving OpenClaw through complex automated operations.
- Restart Ollama (optional): If you cannot call the cloud model after logging in, close the Ollama window, search for “Ollama” in the Windows Start menu, and reopen the application.
Note: Ollama’s free cloud models have usage limits, but for testing OpenClaw and performing basic automated operations these limits can safely be ignored; if you need to go beyond them, click the “Upgrade” link in the interface to move to the paid tier.
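Programmatically, cloud-hosted entries can be recognized by the `-cloud` suffix on the model tag, as in the `gpt-oss:120b-cloud` example above. A small sketch, with helper names of our own choosing, that falls back to a cloud model when no suitable local one is available:

```python
def is_cloud_model(model_tag: str) -> bool:
    """Cloud-hosted entries in Ollama's model list carry a '-cloud' suffix."""
    return model_tag.endswith("-cloud")

def pick_model(available: list, prefer_local: bool = True) -> str:
    """Prefer a local model when asked; otherwise fall back to a cloud one."""
    local = [m for m in available if not is_cloud_model(m)]
    cloud = [m for m in available if is_cloud_model(m)]
    if prefer_local and local:
        return local[0]
    if cloud:
        return cloud[0]
    raise ValueError("no models available")

models = ["deepseek-r1:8b", "gpt-oss:120b-cloud"]
print(pick_model(models, prefer_local=False))  # → gpt-oss:120b-cloud
```

On a low-end PC you would call `pick_model(models, prefer_local=False)` so OpenClaw is driven by the cloud model; on a well-equipped machine the default keeps everything local.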
6. Conclusion: Ollama Deployed, Ready for OpenClaw Automation
Following the steps above, we have installed Ollama and configured and called both local and cloud models. OpenClaw’s “brain” is now in place; all that remains is to connect OpenClaw to Ollama for smoother automated operation.
Note that local models suit users with privacy requirements and well-equipped computers, while cloud models suit low-end machines and users who prioritize speed; choose flexibly according to your situation. In the next issue, we will explain how to install OpenClaw on Ubuntu so that it can officially call Ollama models and run automated operations locally.
7. Demo Video
You can watch the demo video below; use the subtitle menu to select your preferred subtitle language.