Posts

Top 5 Tips to Improve API Performance

In this post, I’ll explore five ways to improve web API performance. 1. Pagination. When you have a lot of data to send, break it into smaller chunks or pages. Imagine you have a huge list of items (thousands of products, blog posts, or comments). If you try to send all of them at once, your service becomes slow because it has to process and transmit a lot of data, and users have to wait a long time to see anything. Breaking the data into smaller pages keeps your service fast and efficient, so users get results quickly instead of waiting forever. For example: show 10 items per page, and let users click "Next" or "Previous" to see more. 2. Async logging. Normally, when you log data (like errors or messages), it gets written to the disk (e.g., a file) immediately. This can slow things down because writing to disk...
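The pagination tip above can be sketched in a few lines of Python. This is a minimal, framework-free illustration, not code from the post: the `paginate` helper and the `products` list are hypothetical names chosen for the example.

```python
def paginate(items, page, page_size=10):
    """Return one page of items plus simple navigation metadata."""
    start = (page - 1) * page_size
    end = start + page_size
    total_pages = max(1, -(-len(items) // page_size))  # ceiling division
    return {
        "items": items[start:end],       # only this page's chunk is sent
        "page": page,
        "total_pages": total_pages,
        "has_next": page < total_pages,  # drives a "Next" button
        "has_prev": page > 1,            # drives a "Previous" button
    }

# Hypothetical data set: 100 products, served 10 per page.
products = [f"product-{i}" for i in range(1, 101)]
first_page = paginate(products, page=1)
print(len(first_page["items"]), first_page["total_pages"])
```

In a real API the same idea usually appears as query parameters (e.g. `?page=2&page_size=10`), with the database doing the slicing via `LIMIT`/`OFFSET` or a cursor so the full list is never loaded into memory.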

Deleting a Model from Ollama locally: A Step-by-Step Guide

In this post, I'll walk you through the process of removing a model from Ollama on your local machine. Ollama is an open-source platform that lets you run large language models (LLMs) like Llama, Mistral, and DeepSeek locally. To completely remove a model, we'll use a specific command that ensures it's deleted from your system. Let’s get started! 🚀

To see all available commands in Ollama, use the -h flag:

ollama -h

Next, list the models installed locally in Ollama on your machine:

ollama list

Finally, delete the model you want to remove:

ollama rm deepseek-r1:14b

Remember to replace 'deepseek-r1:14b' with the name of the model you want to delete. Thank you for reading! If you found this article helpful, please share it with your friends and leave your thoughts in the comments. 😊