
Enola-AI Python Library Documentation

Table of Contents

  1. Introduction
  2. Features
  3. Requirements
  4. Installation
  5. Getting Started
  6. Understanding how Enola-AI works
  7. Documentation: Sending Data to Enola-AI
  8. Summary
  9. Contributing
  10. License
  11. Contact

1. Introduction

Enola-AI is an advanced GenAI platform designed to validate and monitor the robustness of artificial intelligence models in highly regulated industries such as finance, healthcare, and education. Our solution ensures that AI implementations comply with strict regulatory standards through continuous assessments, seamless integrations, and real-time monitoring.

This documentation provides a comprehensive guide on how to use the Enola-AI Python library to interact with the platform. You will learn how to install the library, understand the types of steps used in tracking, send different types of data, and utilize various features to enhance your AI model's performance and compliance.


2. Features

  • Multilevel Evaluation: Collect feedback from users, automated assessments, and reviews from internal experts.
  • Real-Time Monitoring: Continuous monitoring capabilities to detect deviations in AI model behavior.
  • Seamless Integration: Compatible with existing infrastructures such as ERP systems, CRM platforms, and data analytics tools.
  • Customizable Configuration: Adapt the evaluation methodology according to the specific needs of the client.
  • Security and Compliance: Advanced security measures and compliance with data protection regulations.

3. Requirements

  • Python 3.7+
  • Enola API Token

4. Installation

Before installing the Enola-AI Python library, ensure that you have Python 3.7 or higher installed.

Install the SDK via pip

In your Command Prompt (Windows) or Terminal (Linux/macOS) type:

pip install enola

Running this command installs the Enola-AI Python library and its dependencies automatically.

Note: Advanced users may prefer to use a Virtual Environment to manage dependencies and isolate the Enola-AI library from other Python packages.
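For example, an isolated environment can be set up with Python's built-in venv module before installing; the environment name enola-env below is just an illustration:

```shell
# Create an isolated environment (the name "enola-env" is arbitrary)
python3 -m venv enola-env

# Activate it (on Windows, run: enola-env\Scripts\activate instead)
. enola-env/bin/activate

# Install the library inside the environment only
pip install enola
```

Deactivate the environment at any time with the `deactivate` command.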


5. Getting Started

To start using the Enola-AI Python library, follow the steps below to initialize tracking in your application.

5.1 Initializing Tracking

To connect to Enola and initialize tracking, you will need:

  • A token provided by Enola-AI. This token is essential for authentication and authorization purposes.
  • A Python script. You can start by creating an empty Python file with a .py extension (e.g., enola_sample.py, enola_script.py).

Steps:

Step 1: Load the Enola API Token

You can load the token from a .env file (recommended):

This method is recommended due to better security for token management.

  • Go to the same directory where your Python file is located.

  • Create a file named .env in the same directory as your Python script.

  • Open the file with a text editor and add this line:

    ENOLA_TOKEN=your_api_token
    
  • Replace your_api_token with your Enola API token.

  • Ensure that your Python file and the .env file are in the same directory so that the token can be loaded.


Alternatively, you can set it directly in your script:

This method is easier and fine for testing purposes, but it is not recommended because the token is exposed in the script.

token = 'your_api_token'

You can also load the token using Environment Variables:

Another option is to set your Enola API token as an environment variable. This approach (recommended for advanced users) offers more flexibility, but the configuration steps depend on your operating system.
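As a minimal sketch, on Linux/macOS the variable can be set for the current shell session like this (`your_api_token` is a placeholder, not a real token):

```shell
# Set the token for the current shell session only (placeholder value)
export ENOLA_TOKEN="your_api_token"

# On Windows (PowerShell) the equivalent would be:
#   $Env:ENOLA_TOKEN = "your_api_token"

# Verify that Python can see it
python3 -c "import os; print(os.getenv('ENOLA_TOKEN'))"
```

To make the variable persist across sessions, add the export line to your shell profile (e.g. ~/.bashrc or ~/.zshrc).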


Step 2: Import the Necessary Libraries

Assuming you are loading the token from the .env file, in your Python script, start by importing the necessary libraries:

# Import necessary libraries
from enola.tracking import Tracking     # Enola Tracking module
from dotenv import load_dotenv          # .env Loader
import os                               # For environment variable access

Step 3: Define User Input

# Define the user input message
user_input = "Hello, what can you do?"  # Input message from the user

Step 4: Load the Enola API Token

# Load .env file
load_dotenv()
# Set up your token
token = os.getenv('ENOLA_TOKEN')

Step 5: Initialize the Tracking Agent

# Initialize the tracking agent
monitor = Tracking(
    token=token,              # Your Enola API token
    name="My Enola Project",  # Name of your tracking session
    is_test=True,             # Set to True if this is a test session
    app_id="my_app_id_01",    # Application ID
    user_id="user_123",       # User ID
    session_id="session_456", # Session ID
    channel_id="console",     # Channel ID (e.g., 'console', 'web', 'mobile')
    ip="192.168.1.1",         # IP address of the client
    message_input=user_input  # Input message from the user
)

Step 6: Create a New Step

# Create a step
step_chat = monitor.new_step("User LLM Question")

Step 7: Add Extra Information

Add any additional information relevant to the step, such as the user's question.

# Add user's question
step_chat.add_extra_info("UserQuestion", user_input)

Step 8: Process the User Input with the Language Model

Simulate the model generating a response to the user's question.

# Simulated model response
model_response = "I'm here to assist you in finding the help or information you need."

Note: You can replace the simulated response with an actual model response (e.g., GPT-4, Ollama, BERT). You can check our user guide to build a chatbot using Ollama by visiting our section Building an Ollama Chatbot.

Step 9: Add the Model's Response to the Step

# Add model's response
step_chat.add_extra_info("ModelResponse", model_response)

Step 10: Close the LLM Step

Indicate that the step has completed successfully and include token usage and costs.

# Close the LLM Step with close_step_token
monitor.close_step_token(
    step=step_chat,
    successfull=True,
    message_output=model_response,
    token_input_num=12,       # Number of input tokens (estimated)
    token_output_num=15,      # Number of output tokens (estimated)
    token_total_cost=0.0025,  # Total cost (example)
    token_input_cost=0.001,   # Cost for input tokens
    token_output_cost=0.0015  # Cost for output tokens
)

Step 11: Execute the Tracking

Send the tracking data to the Enola-AI server.

# Execute the tracking and send the data to Enola-AI server
monitor.execute(
    successfull=True,
    message_output=model_response,
    num_iteratons=1
)

5.2 Complete Example: Basic Tracking Initialization

Here's the complete code incorporating all the steps:

# Import necessary libraries
from enola.tracking import Tracking
from dotenv import load_dotenv
import os

# Define the user input message
user_input = "Hello, what can you do?"  # Input message from the user

# Load .env file and set up your token
load_dotenv()
token = os.getenv('ENOLA_TOKEN')

# Initialize the tracking agent
monitor = Tracking(
    token=token,
    name="My Enola Project",  # Name of your tracking session
    is_test=True,             # Set to True if this is a test session
    app_id="my_app_id_01",    # Application ID
    user_id="user_123",       # User ID
    session_id="session_456", # Session ID
    channel_id="console",     # Channel ID (e.g., 'console', 'web', 'mobile')
    ip="192.168.1.1",         # IP address of the client
    message_input=user_input  # Input message from the user
)

# Create a step
step_chat = monitor.new_step("User LLM Question")

# Add user's question
step_chat.add_extra_info("UserQuestion", user_input)

# Simulated model response
model_response = "I'm here to assist you in finding the help or information you need."

# Add model's response
step_chat.add_extra_info("ModelResponse", model_response)

# Close the LLM Step
monitor.close_step_token(
    step=step_chat,
    successfull=True,
    message_output=model_response,
    token_input_num=12,
    token_output_num=15,
    token_total_cost=0.0025,
    token_input_cost=0.001,
    token_output_cost=0.0015
)

# Execute the tracking and send the data to Enola-AI server
monitor.execute(
    successfull=True,
    message_output=model_response,
    num_iteratons=1
)

After initializing the tracking agent and executing it, you should get a console output like this:

2024-10-30 09:43:29,909 WELCOME to Enola...
2024-10-30 09:43:29,909 authorized...
2024-10-30 09:43:29,909 STARTED!!!
My Enola Project: sending to server...
My Enola Project: finish OK!

This means you have successfully connected to Enola-AI and sent the data to the servers.


6. Understanding how Enola-AI works

To better understand how Enola-AI works and what it has to offer, it is important to understand its different features. This section is divided into four subcategories:


6.1. Track Activity

When using Enola-AI, you can add Track Activity, allowing you to track any interactions from the system and users. By tracking the interactions in your system with Enola-AI, you can effectively monitor, validate and evaluate your models.



6.2. Steps in Enola-AI

In Enola-AI, the concept of steps is fundamental for tracking the execution flow of your AI agents. Each step represents a significant action or event in your agent's processing pipeline.

There are two main types of steps:

  • Generic Steps: Used for general-purpose tracking of actions that are not specific to language models, such as data retrieval, preprocessing, or any custom logic.

  • LLM Steps: Specifically designed for tracking interactions with Language Models (e.g., GPT-4, Ollama, BERT), where token usage and costs are relevant.

Understanding the difference between these step types is crucial for accurate tracking and cost analysis.
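To illustrate the distinction conceptually (this is a mock for explanation only, not the Enola-AI library's actual classes): a generic step carries just a name and arbitrary extra information, while an LLM step additionally tracks token counts and costs, as seen in the close_step_token call from Section 5.

```python
from dataclasses import dataclass, field

# Conceptual mock only -- NOT the Enola-AI library's real API.
@dataclass
class GenericStep:
    name: str
    extra_info: dict = field(default_factory=dict)

@dataclass
class LLMStep(GenericStep):
    token_input_num: int = 0
    token_output_num: int = 0
    token_total_cost: float = 0.0

# A data-retrieval action needs no token accounting...
retrieval = GenericStep(name="Fetch user history")

# ...while an LLM interaction does, so its cost can be analyzed later.
chat = LLMStep(name="User LLM Question",
               token_input_num=12, token_output_num=15,
               token_total_cost=0.0025)
```

Tracking LLM interactions with the token-aware step type is what makes per-execution cost analysis possible.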



6.3. Feedback Evaluation

Enola-AI provides a feedback system that allows users to evaluate AI agent executions. Feedback can be submitted either through the Enola-AI platform or programmatically via code. This helps in assessing the performance of your AI agents and gathering user insights for improvement.



6.4. Extracting Information

Enola-AI allows you to retrieve the Input and Output data from previous executions for analysis or further manual and automatic processing.



7. Documentation: Sending Data to Enola-AI

In this section, you will find complete documentation about sending data to the Enola-AI servers. The guide includes step-by-step instructions and code examples, along with explanations of the system's functionalities.

For the complete documentation, visit our guide Sending Data to Enola-AI.


8. Summary

  • This documentation provides a basic guide on using the Enola-AI Python library to initialize tracking and send data.
  • For detailed documentation about sending data with Enola-AI, you can visit our Sending Data to Enola-AI guide.

8.1. Building an Ollama Chatbot

  • For a comprehensive guide on how to build a Chatbot using Ollama with Enola-AI Tracking, you can visit our section Building an Ollama Chatbot.

8.2. Frequently Asked Questions

8.3. Complete Code Examples


9. Contributing

Contributions are welcome! Please open an issue or submit a pull request for any improvements or corrections.

When contributing, please ensure to:

  • Follow the existing coding style.
  • Write clear commit messages.
  • Update documentation as necessary.
  • Ensure that any code changes are covered by tests.

10. License

This project is licensed under the BSD 3-Clause License.


11. Contact

For any inquiries or support, please contact us at [email protected].
