When you use an API such as OpenAI's, the main disadvantage is that you pay for every call. Voice commands to control the UI should therefore be limited to paying customers, with rate limits in place to keep costs under control. You could run an open-source model such as LLaMA on your own server instead, but that requires more compute and memory than you currently have.
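One simple way to implement the rate limiting mentioned above is a per-user token bucket: each user gets a fixed budget of calls that refills over time. This is a minimal sketch, not a production implementation; the class name and parameters are my own choices for illustration.

```python
import time

class TokenBucket:
    """Per-user rate limiter: each user gets `capacity` calls,
    refilled at `refill_rate` tokens per second."""

    def __init__(self, capacity, refill_rate):
        self.capacity = capacity
        self.refill_rate = refill_rate
        self.tokens = {}    # user -> remaining tokens
        self.updated = {}   # user -> timestamp of last check

    def allow(self, user, now=None):
        """Return True if this user may make an API call right now."""
        now = time.monotonic() if now is None else now
        tokens = self.tokens.get(user, self.capacity)
        last = self.updated.get(user, now)
        # Refill tokens for the time elapsed since the last check.
        tokens = min(self.capacity, tokens + (now - last) * self.refill_rate)
        self.updated[user] = now
        if tokens >= 1:
            self.tokens[user] = tokens - 1
            return True
        self.tokens[user] = tokens
        return False
```

The web app would call `allow(user_id)` before forwarding a voice command to the paid API, and reject the request (or queue it) when the budget is spent.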
17.02.2025: Open-source AI models like DeepSeek open the door to self-hosted AI. You will need a powerful server with plenty of RAM and a capable GPU, and in the cloud such a server might cost more than the API calls it replaces. One solution is to run the AI model on your own physical server and keep the web app on a cloud server, which makes API calls to the AI running on your hardware.
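The split described above could look like this from the cloud web app's side. This is a sketch assuming the physical server runs something like vLLM or Ollama, both of which expose an OpenAI-compatible `/v1/chat/completions` endpoint; the server address and model name are placeholders.

```python
import json
import urllib.request

# Hypothetical address of the self-hosted AI server (e.g. reachable
# from the cloud server over a VPN or private network).
MODEL_SERVER = "http://10.0.0.5:8000"

def build_chat_request(prompt, model="deepseek-r1"):
    """Build the HTTP request the cloud web app sends to the AI server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{MODEL_SERVER}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

def ask_model(prompt):
    """Send the prompt to the self-hosted model and return its reply."""
    with urllib.request.urlopen(build_chat_request(prompt)) as resp:
        body = json.loads(resp.read())
    return body["choices"][0]["message"]["content"]
```

Because the endpoint is OpenAI-compatible, switching between the paid API and the self-hosted model is mostly a matter of changing the base URL and the API key handling.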