AI Assistant
AI Assistant, powered by Large Language Models (LLMs), can transform how you use CoCalc for learning, writing programs, and writing scientific documents. Below are some areas where CoCalc’s context-sensitive AI Assistant can save you time and effort.
LLM Choice and Billing
CoCalc lets you use many different LLMs from a variety of providers:
Naturally, this choice may be overwhelming! Which one should you choose? The answer depends on what exactly you are doing as well as on your personal preferences. We provide a short description for each model, but there is no substitute for personal experience, so try a few! It can be instructive to ask different models to do exactly the same task and see which answer you like most. You can also change your choice at any time, so do not worry about making “the right” one!
Apart from the quality of answers, the models differ in speed and price. CoCalc covers the cost of the least expensive ones, so they are free for our users, while more advanced LLMs are provided for a fee based on their cost to us. A typical interaction with a paid LLM costs from a fraction of a cent to a few cents; the exact amount cannot be known in advance, since it depends on the length of both the input and the output. To see your exact charges, go to https://cocalc.com/settings/purchases and hover over any entry to see the amount with sub-cent precision:
What Is Sent to LLMs and How Is My Data Used?
CoCalc does NOT have any automatic/background communication with LLMs. Your code, requests, and documents are sent to LLMs ONLY when you explicitly ask for it. Our AI Assistant tries to send the appropriate piece of your work to the chosen LLM, but you have full control and can preview exactly what will be sent:
CoCalc has commercial agreements with LLM providers: the data provided by our users is kept private and is not used for training models.
You can also spin up a fully private LLM on your very own Compute Servers! In this video, William shows how to create a GPU-backed OpenWebUI server in a few minutes:
Jupyter Notebooks
There are a number of ways to call AI Assistant from a Jupyter notebook:
You can ask it to generate new code, perhaps based on what is already in your notebook; explain existing code; modify code to make it better, or at least make it run; or even translate it into a different language!
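For example, a prompt like “plot sin(x) on the interval [0, 2π]” (a made-up request, just for illustration; the exact output varies from model to model) could produce code along these lines:

```python
# a sketch of the kind of code an LLM might generate for the prompt
# "plot sin(x) on the interval [0, 2*pi]"
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 2 * np.pi, 200)  # 200 sample points on [0, 2*pi]
plt.plot(x, np.sin(x))
plt.xlabel("x")
plt.ylabel("sin(x)")
plt.title("y = sin(x)")
plt.show()
```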
In most cases the response from AI Assistant appears in Side Chat to avoid unwanted modifications of your document:
If you are happy with the generated code, it is very easy to copy and paste it back into your notebook, but you can also run it directly in the chat to test how it works. If you are not happy with the answer, you can regenerate the response, perhaps using a more advanced model, or continue the conversation in the chat:
Have you ever hit an error message in a Jupyter notebook? Now you can click a button and AI Assistant will try to figure out how to fix the error:
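For instance, consider a toy example: a cell that uses pandas without importing it fails with NameError: name 'pd' is not defined. Sent the traceback, the assistant will typically point out the missing import:

```python
# a toy example: without the next line, the code below fails with
#   NameError: name 'pd' is not defined
# the suggested fix is simply to add the missing import
import pandas as pd

data = pd.DataFrame({"a": [1, 2, 3]})
print(data)
```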
Linux Terminal and Shell Scripts
A Linux Terminal is extremely powerful in combination with AI Assistant, because the assistant can help you write a script using any command that can be invoked from the command shell. For example:
Some other requests to inspire you (a sketch of a possible answer to the first one follows the list):
replace ‘x’ by ‘y’ in all files
how can I use pari/gp to compute the number of primes up to 2023
I am using psql to query a table with a column called “time”. I would like to make a table showing the number of entries in my table for each of the last 7 days.
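For the first request above, an LLM might suggest a one-liner along these lines (a sketch; the exact answer depends on the model and on how you phrase the request):

```bash
# one plausible answer: replace every occurrence of x with y,
# in place, in all files in the current directory (GNU sed)
sed -i 's/x/y/g' *
```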
Editing Python, R, and Other Files
CoCalc’s Frame Editor includes an AI Assistant button for programming language file types such as .py, .R, .pl, .c, and others, so you can use it in the same way as with Linux terminals!
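For example, selecting a bare function in a .py file and asking the assistant to “add a docstring and type hints” (a hypothetical request) might turn it into something like:

```python
# a sketch: after asking the assistant to document
#   def mean(xs): return sum(xs) / len(xs)
# the improved version might look like this
def mean(xs: list[float]) -> float:
    """Return the arithmetic mean of a non-empty list of numbers."""
    return sum(xs) / len(xs)
```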
Typeset Scientific Content with LaTeX
In addition to getting general help with LaTeX, including fixing errors, you can describe a formula and have AI Assistant turn it into LaTeX code. In the LaTeX editor, use the menu Insert > AI Generated Formula:
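For example, describing “the solutions of a quadratic equation” might yield something like:

```latex
% a formula the assistant might generate from the description
% "the solutions of the quadratic equation a x^2 + b x + c = 0"
\[
  x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
\]
```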
Or you can ask AI Assistant to start writing the whole document for you:
LLMs in Chat Rooms and Side Chat
Using the AI Assistant buttons has the advantage of automatically attaching the appropriate context, but you can also call any LLM directly in a chat room or in the side chat next to an open file. Just @-mention the model and enter your question there:
After getting a response, you can continue the conversation: