Nvidia gives users custom LLMs

Their new release, Chat with RTX, lets the chosen model answer questions using the user's own local files.

The choice of models is slim: Llama 2 and Mistral are the only available options.

The demo video looks interesting, with how easily you can select and interact with files.

However, it comes across as underwhelming, especially given the requirements for this 35GB app:

• Windows 11

• 30 or 40 Series GPU

• 16GB+ RAM
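For context, the "chat with your files" feature follows the retrieval-augmented generation (RAG) pattern: fetch the documents most relevant to a question, then feed them to the model as context. Here is a minimal sketch of that idea, assuming a toy keyword-overlap retriever in place of the real vector search, and with `retrieve` and `build_prompt` as illustrative names rather than anything from Nvidia's app:

```python
def retrieve(query, docs, top_k=2):
    """Score each document by how many query words it shares,
    a crude stand-in for embedding-based vector search."""
    words = set(query.lower().split())
    scored = sorted(docs, key=lambda d: -len(words & set(d.lower().split())))
    return scored[:top_k]

def build_prompt(query, docs):
    """Prepend the retrieved documents to the question -- the core
    move of retrieval-augmented generation."""
    context = "\n---\n".join(retrieve(query, docs))
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"

docs = [
    "The quarterly report shows revenue grew 12 percent.",
    "Meeting notes: ship the demo video by Friday.",
    "Grocery list: eggs, milk, bread.",
]
print(build_prompt("What did revenue do in the quarterly report?", docs))
```

The prompt that comes out contains only the files that matter for the question, which is why tools like this feel responsive even with modest local models.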

When considering all the available tools, Chat with RTX doesn't seem as impressive.

Especially with more accessible, and perhaps even better, alternatives such as Open Interpreter, which I recently talked about. -> https://www.linkedin.com/posts/luka-anicin_naturallanguageprocessing-machinelearning-activity-7155537188955303937-OmZP

Just because a product/service has a brand doesn't always mean it's good. In the age of AI, everyone can provide value, whether a company or a GitHub user.

It's important to differentiate between hype and actual genuine value that can be provided to users.
