Classes

Agent

class phi_3_vision_mlx.Agent(toolchain=None, enable_api=True, **kwargs)

A flexible agent class for managing toolchains and executing prompts.

The Agent class provides a framework for processing prompts through a series of tools (functions) defined in a toolchain. It manages the execution flow, handles input and output, and maintains a log of operations.

Attributes:

_default_toolchain : str

A string defining the default toolchain, which includes adding code to prompts, generating responses, and executing code.

Methods:

__init__(self, toolchain=None, enable_api=True, **kwargs)

Initialize the Agent with a toolchain and other optional parameters.

__call__(self, prompt:str, images=None)

Process a given prompt (and optionally images) through the toolchain.

reset()

Reset the agent’s log and ongoing operations.

log_step()

Log the current step of operations.

end()

End the current session, log the final step, and reset the agent.

set_toolchain(s)

Set a new toolchain for the agent to use.

Usage:

The Agent can be used to process prompts through a defined series of operations:

  1. Initialize an Agent with a custom toolchain or use the default.

  2. Call the Agent with a prompt (and optionally images) to process.

  3. The Agent will execute each tool in the toolchain, passing results between steps.

  4. Results are logged at each step and can be accessed or saved.

The toolchain is a string defining a series of operations, where each line is of the form: ‘output1, output2, … = function_name(input1, input2, …)’

Example:

>>> agent = Agent()
>>> result = agent("Tell me about this image", images=["path/to/image.jpg"])
>>> print(result['responses'])
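
The toolchain itself can be customized in the same 'outputs = function_name(inputs)' format described above. A minimal sketch (it assumes generate is exposed to the toolchain under that name, as in the get_api example later in this reference):

>>> vlm_agent = Agent(toolchain="responses = generate(prompt, images)")
>>> result = vlm_agent("Describe this image", images=["path/to/image.jpg"])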

Notes:

  • The Agent’s behavior regarding API input handling is determined by the ‘enable_api’ parameter. This affects how the Agent processes prompts containing the ‘<|api_input|>’ delimiter.

  • The toolchain can be customized to include different functions and processing steps.

  • The Agent maintains a log of all operations, which can be useful for debugging or analysis.

Functions

load

phi_3_vision_mlx.load(blind_model=False, quantize_model=False, quantize_cache=False, use_adapter=False, **kwargs)

Load a Phi-3 model with specified configuration.

Parameters:

blind_model : bool, optional

If True, load the language-only model. If False, load the vision model. Default is False.

quantize_model : bool, optional

If True, load the quantized version of the model. Default is False.

quantize_cache : bool, optional

If True, use quantized cache for the model. Default is False.

use_adapter : bool, optional

If True, load and use a LoRA adapter for the model. Default is False.

**kwargs : dict

Additional keyword arguments to pass to the model loading function.

Returns:

tuple

A tuple containing the loaded model and processor.

Notes:

  • If the model path doesn’t exist, it will call _setup() to download or prepare the model.

  • The function uses predefined paths (PATH_*) to locate model files.
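
Example:

A minimal usage sketch (the options shown are illustrative; the returned tuple can be reused as the preload argument of generate()):

>>> model, processor = load(blind_model=True, quantize_model=True)
>>> text = generate("Hello!", preload=(model, processor))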

generate

phi_3_vision_mlx.generate(prompt, images=None, preload=None, blind_model=False, quantize_model=False, quantize_cache=False, use_adapter=False, max_tokens=512, verbose=True, return_tps=False, early_stop=False, stream=True, apply_chat_template=True, enable_api=False)

Generate text based on a given prompt, optionally with image input.

Parameters:

prompt : str

The input prompt for text generation.

images : list of str or None, optional

List of image paths or URLs to process along with the prompt.

preload : tuple or None, optional

A pre-loaded model and processor tuple. If None, a model will be loaded.

blind_model : bool, optional

If True, use the language-only model. Default is False.

quantize_model : bool, optional

If True, use the quantized version of the model. Default is False.

quantize_cache : bool, optional

If True, use quantized cache for the model. Default is False.

use_adapter : bool, optional

If True, use a LoRA adapter with the model. Default is False.

max_tokens : int, optional

Maximum number of tokens to generate. Default is 512.

verbose : bool, optional

If True, print additional information during generation. Default is True.

return_tps : bool, optional

If True, return tokens per second information. Default is False.

early_stop : bool or int, optional

If True or an integer, stop generation early under certain conditions.

stream : bool, optional

If True, stream the generated text. Default is True.

apply_chat_template : bool, optional

If True, apply a chat template to the prompt. Default is True.

enable_api : bool, optional

If True, enable API-related functionality. Default is False.

Returns:

str or tuple

Generated text, or a tuple containing generated text and additional information if return_tps is True.

Notes:

  • If ‘<|api_input|>’ is in the prompt and enable_api is True, it will call get_api() instead.

  • The function can handle both text-only and text-image inputs.
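
Example:

A minimal sketch of text-only and text-image calls (the image path is a placeholder):

>>> generate("Write a haiku about autumn.")
>>> generate("What is shown in this image?", images=["path/to/image.jpg"])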

choose

phi_3_vision_mlx.choose(prompt, choices='ABCDE', images=None, preload=None, blind_model=False, quantize_model=False, quantize_cache=False, use_adapter=False, verbose=True, apply_chat_template=True)

Choose the best option from a set of choices for a given prompt.

It selects the most appropriate answer from a set of choices based on the input prompt.

Parameters:

prompt : str or list of str

The input prompt(s) for which to choose an answer.

choices : str, optional

A string containing the possible choices. Default is ‘ABCDE’.

images : list or None, optional

List of image inputs for multimodal models. Default is None.

preload : tuple or None, optional

A tuple containing (model, processor) if already loaded. If None, the function will load the model and processor based on the provided configuration. Default is None.

blind_model : bool, optional

If True, uses a model without vision capabilities. Default is False.

quantize_model : bool, optional

If True, uses a quantized version of the model for reduced memory usage. Default is False.

quantize_cache : bool, optional

If True, uses cache quantization for improved memory efficiency. Default is False.

use_adapter : bool, optional

If True, uses a LoRA adapter with the model for fine-tuned behavior. Default is False.

verbose : bool, optional

If True, print additional information during execution. Default is True.

apply_chat_template : bool, optional

If True, applies a chat template to the prompt before processing. Default is True.

Returns:

str or list of str

The chosen answer(s) from the provided choices. Returns a single string if the input prompt was a string, otherwise returns a list of strings.

Example:

>>> prompt = "What is the capital of France? A: London B: Berlin C: Paris D: Madrid E: Rome"
>>> result = choose(prompt)
>>> print(result)
'C'
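
Because the prompt may also be a list of strings, a batched call returns a list of answers. A minimal sketch (the prompts and choices are illustrative):

>>> prompts = [
...     "Is the sky blue? A: Yes B: No",
...     "Is water dry? A: Yes B: No",
... ]
>>> results = choose(prompts, choices='AB')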

constrain

phi_3_vision_mlx.constrain(prompt, constraints=[(0, '\nThe'), (100, ' The correct answer is'), 'ABCDE'], images=None, preload=None, blind_model=False, quantize_model=False, quantize_cache=False, use_adapter=False, verbose=True, apply_chat_template=True, use_beam=False)

Perform constrained decoding on the given prompt using specified constraints.

This function generates text based on the input prompt while adhering to the specified constraints. It supports various model configurations and can handle both text and image inputs.

Parameters:

prompt : str or list of str

The input prompt(s) for text generation.

constraints : list, optional

List of constraints. Each constraint can be:

  • A tuple (max_tokens, constraint_text): Specifies the maximum number of tokens to generate before the constraint_text must appear.

  • A string: Triggers the use of _choose_from() to select from the given options.

images : list or None, optional

List of image inputs for multimodal models. Default is None.

preload : tuple or None, optional

A tuple containing (model, processor) if already loaded. If None, the function will load the model and processor based on the provided configuration. Default is None.

blind_model : bool, optional

If True, uses a model without vision capabilities. Default is False.

quantize_model : bool, optional

If True, uses a quantized version of the model for reduced memory usage. Default is False.

quantize_cache : bool, optional

If True, uses cache quantization for improved memory efficiency. Default is False.

use_adapter : bool, optional

If True, uses a LoRA adapter with the model for fine-tuned behavior. Default is False.

verbose : bool, optional

If True, print additional information during execution. Default is True.

apply_chat_template : bool, optional

If True, applies a chat template to the prompt before processing. Default is True.

use_beam : bool, optional

If True, uses beam search for generation instead of greedy decoding. Default is False.

Returns:

str or list of str

The generated text(s) adhering to the specified constraints. Returns a single string if the input prompt was a string, otherwise returns a list of strings.

Notes:

  • The function preprocesses the prompt and applies the constraints sequentially.

  • It uses either a custom constrained generation algorithm or beam search to ensure the output adheres to the constraints.

  • When a constraint is a string, it uses _choose_from() to select from the given options.

  • The output format matches the input format (str or list of str).

  • If apply_chat_template is True, the prompt is processed through a chat template before generation.

Example:

>>> prompt = "What is the capital of France?"
>>> constraints = ...
>>> result = constrain(prompt, constraints)
>>> print(result)
'The capital of France is Paris. The correct answer is C'
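
For reference, the constraints placeholder above could take the same form as the default shown in the signature (a sketch; the token budgets, constraint texts, and choice string can be adapted to the task):

>>> constraints = [(0, '\nThe'), (100, ' The correct answer is'), 'ABCDE']
>>> result = constrain("What is the capital of France? A: London B: Berlin C: Paris D: Madrid E: Rome", constraints)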

execute

phi_3_vision_mlx.execute(code_strings, file_prefix=0, verbose=True)

Execute one or more Python code strings and capture the results.

Parameters:

code_strings : str or list of str

A single code string or a list of code strings to execute.

file_prefix : int or str, optional

A prefix to use for naming output files. Default is 0.

verbose : bool, optional

If True, print execution results. Default is True.

Returns:

dict

A dictionary containing lists of execution results:

  • ‘codes’: The input code strings

  • ‘files’: Names of any files generated during execution

  • ‘souts’: Standard output from each execution

  • ‘serrs’: Standard error from each execution

Notes:

  • Each code string is executed in a separate environment.

  • The function captures standard output, standard error, and any generated files.

  • If verbose is True, execution results are printed to the console.
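
Example:

A minimal sketch (the console output printed when verbose=True is omitted here):

>>> results = execute(["print(1 + 1)", "import math\nprint(math.pi)"])
>>> results['souts']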

train_lora

phi_3_vision_mlx.train_lora(model_path='models/phi3_mini_128k_Q', adapter_path=None, lora_targets=['self_attn.qkv_proj'], lora_layers=1, lora_rank=1, epochs=1, batch_size=1, take=10, lr=0.0001, warmup=0.5, mask_ratios=None, dataset_path='JosefAlbers/akemiH_MedQA_Reason')

Train a LoRA (Low-Rank Adaptation) model using the specified parameters.

This function loads a pre-trained model, applies LoRA adaptations, and fine-tunes it on a given dataset. It supports various training configurations, including masking strategies and learning rate scheduling.

Parameters:

model_path : str, optional

Path to the base model. Defaults to PATH_QUANTIZED_PHI3_BLIND.

adapter_path : str or None, optional

Path to save the LoRA adapter. If None, it’s set to ‘{PATH_ADAPTERS}/{model_path}’. Defaults to None.

lora_targets : list of str, optional

Names of the modules to apply LoRA to. Defaults to ['self_attn.qkv_proj'].

lora_layers : int, optional

Number of layers to apply LoRA to. Defaults to 1.

lora_rank : int, optional

Rank of the LoRA adapter. Defaults to 1.

epochs : int, optional

Number of training epochs. Defaults to 1.

batch_size : int, optional

Batch size for training. Defaults to 1.

take : int, optional

Number of samples to take from the dataset. Defaults to 10.

lr : float, optional

Learning rate for the optimizer. Defaults to 1e-4.

warmup : float, optional

Fraction of total steps to use for learning rate warmup. Defaults to 0.5.

mask_ratios : list of float or None, optional

Ratios for input masking. If None, no masking is applied. Defaults to None.

dataset_path : str, optional

Path to the dataset used for training. Defaults to “JosefAlbers/akemiH_MedQA_Reason”.

Returns:

None

The function doesn’t return a value but saves the trained LoRA adapter to the specified path.

Notes:

  • The function uses several helper methods for data processing, loss calculation, and training.

  • It applies a learning rate schedule with warmup.

  • If mask_ratios are provided, it applies input masking during training.

  • The function uses AdamW optimizer for training.

  • After training, it cleans up by deleting the model and processor to free memory.

Example:

>>> train_lora(lora_layers=5, lora_rank=16, epochs=10,
...            take=10, batch_size=2, lr=1e-4, warmup=.5,
...            dataset_path="JosefAlbers/akemiH_MedQA_Reason")

test_lora

phi_3_vision_mlx.test_lora(model_path='models/phi3_mini_128k_Q', adapter_path=True, dataset_path='JosefAlbers/akemiH_MedQA_Reason', take=(0, 10), batch_size=1, test_result_path='test_result.csv')

Test a LoRA (Low-Rank Adaptation) model on a given dataset using various generation methods.

This function loads a model and its LoRA adapter (if specified), processes a dataset, and evaluates the model’s performance on recall (summarization) and answer generation tasks using different methods.

Parameters:

model_path : str, optional

Path to the base model. Default is PATH_QUANTIZED_PHI3_BLIND.

adapter_path : bool or str, optional

Path to the LoRA adapter. If True, it’s set to ‘{PATH_ADAPTERS}/{model_path}’. If None, the model is tested without an adapter. Default is True.

dataset_path : str, optional

Path to the dataset to be used for testing. Default is “JosefAlbers/akemiH_MedQA_Reason”.

take : tuple of int or int, optional

Range of samples to take from the dataset. If tuple, in the format (start, end). If int, takes (0, take) samples. Default is (0, 10).

batch_size : int, optional

Number of samples to process in each batch. Default is 1.

test_result_path : str, optional

Path of the CSV file to which test results are written. Default is 'test_result.csv'.

Returns:

None

The function prints the evaluation results but doesn’t return any value.

Notes:

  • Performs three tasks: recall of trained texts, answer generation using _choose_from(), and answer generation using constrained decoding.

  • For recall, it generates a summary and compares it with the true summary.

  • For answer generation, it uses three methods:

    1. _choose_from(): Chooses an answer from options A-E.

    2. _constrain(): Generates an answer with specific constraints.

    3. _beam(): Generates an answer using beam search with constraints.

  • Prints comparisons between generated and true responses for each task.

  • After completion, prints scores for all answer generation methods.

  • The model and processor are deleted after use to free up memory.

Example:

>>> test_lora(model_path="path/to/model", adapter_path="path/to/adapter",
...           dataset_path="dataset/path", take=(0, 10), batch_size=2)

benchmark

phi_3_vision_mlx.benchmark(blind_model=False, json_path='benchmark.json')

Perform a benchmark test on different model configurations and save the results.

This function tests various configurations of a language model (vanilla, quantized model, quantized cache, and LoRA) on a set of predefined prompts. It measures the performance in terms of tokens per second (TPS) for both prompt processing and text generation.

Parameters:

blind_model : bool, optional

If True, benchmarks the language-only (‘blind’) version of the model instead of the vision model. Defaults to False.

json_path : str, optional

Path of the JSON file to which benchmark results are saved. Defaults to 'benchmark.json'.

Returns:

None

The function doesn’t return a value but saves the benchmark results to a JSON file and prints a formatted version of the results.

Behavior:

  1. Defines a set of test prompts, including text-only and image-text prompts.

  2. Tests four configurations: vanilla, quantized model, quantized cache, and LoRA.

  3. For each configuration:

     • Loads the model with appropriate settings.

     • Processes each prompt and generates text.

     • Measures TPS for prompt processing and text generation.

  4. Saves all results to ‘benchmark.json’.

  5. Calls a function to format and print the benchmark results.

Notes:

  • The function uses predefined prompts, including a mix of text-only and image-text tasks.

  • It generates 100 tokens for each prompt.

  • The results are stored in a dictionary with keys ‘vanilla’, ‘q_model’, ‘q_cache’, ‘lora’.

  • Each result entry contains the prompt index, prompt TPS, and generation TPS.

  • The function cleans up resources by deleting the model after each configuration test.

  • Requires ‘generate’, ‘load’, and ‘_format_benchmark’ functions to be defined elsewhere.

Example:

>>> benchmark()
# This will run the benchmark and save results to 'benchmark.json',
# then print a formatted version of the results.
>>> benchmark(blind_model=True)
# Runs the benchmark using the 'blind' version of the model (i.e., Phi-3-Mini-128K)

chat_ui

phi_3_vision_mlx.chat_ui(agent=None)

Create and launch a chat user interface using Gradio.

This function sets up an interactive chat interface that allows users to communicate with an AI agent. It supports text input and file uploads (specifically images) and displays the conversation history.

This function is also the entry point for the ‘phi3v’ command-line tool, which can be run directly from the terminal after installing the phi-3-vision-mlx package.

Parameters:

agent : Agent, optional

An instance of the Agent class to handle the chat logic. If None, a new Agent instance is created. Default is None.

Returns:

None

The function launches a Gradio interface and doesn’t return a value.

Behavior:

  1. Initializes the chat agent if not provided.

  2. Defines helper functions for message handling and bot responses:

     • add_message: Adds user messages (text and files) to the chat history.

     • bot: Processes user input through the agent and formats the response.

     • reset: Resets the conversation and clears the chat history.

  3. Creates a Gradio Blocks interface with the following components:

     • Chatbot: Displays the conversation history.

     • MultimodalTextbox: Allows text input and file uploads.

     • Reset button: Clears the conversation.

  4. Sets up event handlers for user input submission and bot responses.

  5. Launches the Gradio interface in the browser.

Notes:

  • The interface supports both text and image inputs.

  • Bot responses are processed to remove ‘<|end|>’ tokens and empty lines.

  • The chat history keeps track of user inputs and bot responses, including file uploads.

  • The interface is set to occupy 80% of the viewport height.

  • The Gradio footer is hidden using custom CSS.

  • The interface is launched in-browser and inline.

Dependencies:

  • Requires the Gradio library for creating the user interface.

  • Assumes the existence of an Agent class that handles the chat logic.

Command-line Usage:

After installing the phi-3-vision-mlx package, you can run this function directly from the terminal using:

$ phi3v

This will launch the chat interface in your default web browser.

Example:

>>> chat_ui()
# This will launch the chat interface in the default web browser.
>>> custom_agent = Agent(custom_params)
>>> chat_ui(agent=custom_agent)
# Launches the chat interface with a custom agent configuration.

get_api

phi_3_vision_mlx.get_api(prompt, n_topk=1, verbose=True)

Retrieve and format API code based on input prompts using vector similarity search.

This function uses a Vector Database (VDB) to find the most relevant API code for given prompts. It’s designed to work with prompts that may contain the ‘<|api_input|>’ delimiter to separate the API request from additional input.

Parameters:

prompt : str or list of str

The input prompt(s) to search for relevant API code. If a prompt contains ‘<|api_input|>’, the part before it is used for the search, and the part after it is used to format the retrieved code.

n_topk : int, optional

The number of top matching API codes to retrieve for each prompt. Default is 1.

verbose : bool, optional

If True, print the obtained API codes. Default is True.

Returns:

list of str

A list of formatted API code strings relevant to the input prompt(s).

Notes:

  • The function uses a VDB (Vector Database) for similarity search.

  • If multiple prompts are provided, it returns a list of API codes for each prompt.

  • The retrieved API code is formatted with the part of the prompt after ‘<|api_input|>’.

  • This function is typically used within an Agent’s toolchain for API-related tasks.

Example:

>>> agent = Agent(toolchain="responses = get_api(prompt)")
>>> agent('Draw <|api_input|> A perfectly red apple, 32k HDR, studio lighting')
# This will retrieve and format API code for image generation based on the given prompt.

In this example, ‘Draw’ is used for the API search, and ‘A perfectly red apple, 32k HDR, studio lighting’ is used to format the retrieved API code.
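
The function can also be called directly, outside an Agent toolchain, using the same prompt format (a minimal sketch):

>>> codes = get_api('Draw <|api_input|> A perfectly red apple, 32k HDR, studio lighting')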

add_code

phi_3_vision_mlx.add_code(prompt, codes)

Append Python code blocks to a given prompt.

Parameters:

prompt : str

The original prompt text.

codes : list of str or None

A list of Python code strings to be appended to the prompt.

Returns:

str or list of str

If codes is None, returns the original prompt. Otherwise, returns a list of strings, each containing the original prompt followed by a Python code block.
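
Example:

A minimal sketch (the prompt and code string are illustrative):

>>> prompts = add_code("Run the following code and explain its output.", ["print('hello')"])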

add_text

phi_3_vision_mlx.add_text(prompt)

Append context to a given prompt by loading text from a URL or file.

This function takes a prompt string or a list of prompts, each potentially containing a URL or file path after an ‘@’ symbol. It loads the text from the specified source and appends it to the corresponding prompt.

Parameters:

prompt : str or list of str

A single prompt string or a list of prompt strings. Each prompt can optionally include an ‘@’ followed by a URL or file path to load additional context from.

Returns:

str or list of str

If the input is a single string, returns a single processed string. If the input is a list, returns a list of processed strings.

Notes:

  • The function splits each prompt at the ‘@’ symbol if present.

  • Text after the ‘@’ is treated as a URL or file path to load additional context from.

  • The loaded context is appended to the original prompt.

  • If no ‘@’ is present, the original prompt is returned unchanged.

  • The function uses _load_text() to retrieve content from URLs or files.

Example:

>>> add_text('How to inspect API endpoints? @ https://raw.githubusercontent.com/gradio-app/gradio/main/guides/08_gradio-clients-and-lite/01_getting-started-with-the-python-client.md')
# Returns the original question followed by the content of the specified URL

rag

phi_3_vision_mlx.rag(prompt, repo_id='JosefAlbers/sharegpt_python_mlx', n_topk=1)

Perform Retrieval-Augmented Generation (RAG) on given prompts using a vector database.

This function takes a prompt or list of prompts, retrieves relevant context from a specified dataset using vector similarity search, and combines the retrieved context with the original prompts.

Parameters:

prompt : str or list of str

A single prompt string or a list of prompt strings to process.

repo_id : str, optional

The Hugging Face dataset repository ID to use for context retrieval. Default is “JosefAlbers/sharegpt_python_mlx”.

n_topk : int, optional

The number of top matching contexts to retrieve for each prompt. Default is 1.

Returns:

str or list of str

If the input is a single string, returns a single processed string. If the input is a list, returns a list of processed strings. Each processed string contains the retrieved context followed by the original prompt.

Notes:

  • The function uses a Vector Database (VDB) to perform similarity search on the dataset.

  • Retrieved contexts are combined with the original prompts in a specified format.

  • The function is designed to work with the chat_template format used in the phi-3 model.

Example:

>>> rag('Comparison of Sortino Ratio for Bitcoin and Ethereum.')
# Returns a string containing relevant context about Sortino Ratio, Bitcoin, and Ethereum,
# followed by the original prompt