Running the DeepSeek-R1 7B distilled model locally on a PC with no GPU, using Ollama.

This is my first time trying to run a model locally. With help from Reddit, I managed to do it today, and it helped me understand how these models work.
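For anyone who wants to try the same setup, this is roughly what it looks like with Ollama's CLI. The model tag `deepseek-r1:7b` is the one Ollama publishes for the 7B distilled variant; no GPU is required, Ollama falls back to CPU automatically.

```shell
# Install Ollama (Linux/macOS one-liner from ollama.com)
curl -fsSL https://ollama.com/install.sh | sh

# Download the 7B distilled DeepSeek-R1 model (~4-5 GB)
ollama pull deepseek-r1:7b

# Start an interactive chat session; runs on CPU if no GPU is found
ollama run deepseek-r1:7b
```

On a machine with 8 GB of RAM the 7B model fits, but expect slow token generation on CPU, which is why the video is sped up.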

The video is sped up 3x; since the model was running on an i7 processor with 8 GB of RAM, the output was a bit slow.