Function Repository Resource:

# OllamaSynthesize

Interact with local AI/LLM models via an Ollama server

ResourceFunction["OllamaSynthesize"][prompt] generates an AI model response for the given prompt.

ResourceFunction["OllamaSynthesize"][prompt, image] generates an AI model response for the given prompt and image.

ResourceFunction["OllamaSynthesize"][{prompt1, image1, …}] generates an AI model response for the prompts and images in the list.

## Details and Options

## Examples

### Basic Examples (5)

Try a basic question:

In[1]:= |

Out[1]= |
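The original input cell is not reproduced here; a minimal sketch of such a call, with an illustrative prompt, might look like:

```wl
(* fetch the resource function once, then ask a simple question *)
ollamaSynthesize = ResourceFunction["OllamaSynthesize"];
ollamaSynthesize["Why is the sky blue? Answer in one or two sentences."]
```

The response text varies from run to run, since the model samples its output.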

Ask a question about an image:

In[2]:= |

Out[2]= |
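A hedged sketch of this cell, using a built-in test image as an illustrative stand-in for the original:

```wl
(* ask the default vision-enabled model about a built-in example image *)
img = ExampleData[{"TestImage", "House"}];
ResourceFunction["OllamaSynthesize"]["What is shown in this image?", img]
```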

Ask another question about a different image:

In[3]:= |

Out[3]= |
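A sketch of the same pattern with a different built-in test image (the prompt is illustrative):

```wl
(* same question pattern, different example image *)
img = ExampleData[{"TestImage", "Peppers"}];
ResourceFunction["OllamaSynthesize"]["Describe the main colors in this image.", img]
```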

A similar question with a different vision-enabled model:

In[4]:= |

Out[4]= |
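A sketch of how the model could be swapped; "llama3.2-vision" is an illustrative Ollama model tag, and any vision-capable model already pulled locally would do:

```wl
(* pass a vision-capable model tag via the "OllamaModel" option *)
img = ExampleData[{"TestImage", "House"}];
ResourceFunction["OllamaSynthesize"]["What is shown in this image?", img,
  "OllamaModel" -> "llama3.2-vision"]
```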

Mix the question and image(s) in a single list:

In[5]:= |

Out[5]= |
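A sketch of the list form, with built-in test images standing in for the originals:

```wl
(* prompt and images mixed in a single list argument *)
img1 = ExampleData[{"TestImage", "House"}];
img2 = ExampleData[{"TestImage", "Boat"}];
ResourceFunction["OllamaSynthesize"][{"What do these two images have in common?", img1, img2}]
```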

### Scope (1)

Solve basic math problems with step-by-step instructions:

In[6]:= |

Out[6]= |
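An illustrative prompt for this kind of call might be:

```wl
(* ask for a worked, step-by-step solution *)
ResourceFunction["OllamaSynthesize"]["Solve 3x + 7 = 19 for x, showing each step."]
```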

### Options (3)

The default model is "Llava". Use the "OllamaModel" option to specify another one:

In[7]:= |

Out[7]= |
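A minimal sketch of the option in use; "mistral" is an illustrative tag for a model that must already be pulled on the local Ollama server:

```wl
(* override the default "Llava" model *)
ResourceFunction["OllamaSynthesize"]["Write a haiku about autumn.", "OllamaModel" -> "mistral"]
```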

Larger models work too, but they are slower, and your machine needs sufficient GPU memory to run them:

In[8]:= |

Out[8]= |
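A sketch with a larger illustrative model tag; expect longer response times, and the call will only succeed if the machine has enough GPU memory for the model:

```wl
(* a large model tag; slower, and memory-hungry *)
ResourceFunction["OllamaSynthesize"]["Summarize the theory of relativity in two sentences.",
  "OllamaModel" -> "llama3.1:70b"]
```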

If you specify a model that does not exist, or one that you have not downloaded locally, an error is raised:

In[9]:= |

Out[9]= |
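A sketch of the failure case, using a deliberately bogus model tag; the exact form of the error message depends on the Ollama server's response:

```wl
(* a model tag that is neither a real model nor pulled locally *)
ResourceFunction["OllamaSynthesize"]["Hello!", "OllamaModel" -> "not-a-real-model"]
```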

### Possible Issues (3)

Repeated calls to OllamaSynthesize give slightly randomized results, and sometimes these results can be wildly incorrect:

In[10]:= |

Out[10]= |
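One way to observe this, sketched with an illustrative arithmetic prompt:

```wl
(* repeat the same prompt several times and compare the answers *)
Table[
  ResourceFunction["OllamaSynthesize"]["What is 17*24? Reply with only the number."],
  3]
```

The three answers may disagree with each other, and none of them is guaranteed to be 408.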