This project is an AI-powered Magic 8 Ball web application. The backend is built with FastAPI and the frontend with React. The application uses a language model, Phi-3-mini-4k-instruct served via Ollama, to generate Magic 8 Ball-style responses.
- Ask the AI-powered Magic 8 Ball any question, and it responds with short, cryptic answers.
- Uses Ollama's Phi3 model to generate responses.
- Simple UI to input questions and receive AI-generated responses in real-time.
- Frontend: Built using React and styled with Tailwind CSS.
- Backend: Built using FastAPI; handles API requests and communicates with the AI model.
- AI Model: Served locally using Ollama and integrated with the backend to generate responses.
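The backend-to-model flow described above can be sketched as follows. This is a minimal illustration, not the project's actual code: the helper names (`build_payload`, `ask_magic8ball`), the prompt wording, and the Ollama endpoint (`http://localhost:11434/api/generate`, Ollama's default) are assumptions.

```python
import json
import urllib.request

# Ollama's default local API endpoint (assumed; adjust if Ollama runs elsewhere).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(question: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    prompt = (
        "You are a Magic 8 Ball. Answer the following question "
        f"with one short, cryptic reply: {question}"
    )
    # stream=False asks Ollama for a single JSON response instead of NDJSON chunks.
    return {"model": "phi3", "prompt": prompt, "stream": False}

def ask_magic8ball(question: str) -> str:
    """Send the question to the locally served model and return its reply."""
    data = json.dumps(build_payload(question)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"].strip()
```

In the real backend, a FastAPI route would wrap a function like `ask_magic8ball` and return its result to the React frontend.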
Ensure Docker and Docker Compose are installed on your machine.
- Build the Containers: To build both the backend and frontend services along with Ollama, run `docker-compose build`.
- Start the Containers: To run both the backend and frontend services along with Ollama, run `docker-compose up`.
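For orientation, a compose file wiring the three services together might look like the sketch below. This is a hypothetical layout, not the repository's actual `docker-compose.yml`: the service names, build contexts, and host ports are assumptions (11434 is Ollama's default API port; 8000 is uvicorn's default; 5173 is Vite's default dev port).

```yaml
# Hypothetical docker-compose.yml; service names, contexts, and ports are assumptions.
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"   # Ollama's default API port
  backend:
    build: ./backend
    ports:
      - "8000:8000"
    depends_on:
      - ollama
  frontend:
    build: ./frontend
    ports:
      - "5173:5173"     # Vite's default dev port
    depends_on:
      - backend
```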
- Start Ollama Server: `ollama serve`
- Run the model: If the model is not present locally, it will be pulled automatically before running: `ollama run phi3`
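When streaming is enabled (Ollama's default for `/api/generate`), the server returns newline-delimited JSON chunks, each carrying a `response` fragment and a `done` flag. A small helper to reassemble them might look like this; the sample chunks are made up for illustration:

```python
import json

def join_stream(ndjson_lines):
    """Concatenate the 'response' fragments from Ollama's streaming NDJSON output."""
    parts = []
    for line in ndjson_lines:
        chunk = json.loads(line)
        parts.append(chunk.get("response", ""))
        if chunk.get("done"):  # the final chunk signals completion
            break
    return "".join(parts)

# Made-up chunks shaped like Ollama's streaming output:
sample = [
    '{"response": "Outlook ", "done": false}',
    '{"response": "not so good.", "done": true}',
]
# join_stream(sample) → "Outlook not so good."
```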
- Clone the repository and enter the backend directory:
  `git clone https://github.com/yourusername/Magic8-Ball.git`
  `cd Magic8-Ball/backend`
- Create a virtual environment and install dependencies:
  `python -m venv venv`
  `source venv/bin/activate` (on Windows: `venv\Scripts\activate`)
  `pip install -r requirements.txt`
- Run the FastAPI server:
  `cd src`
  `uvicorn app.main:app --reload`
- Test the API: Open your browser and go to `http://127.0.0.1:8000/docs` to view the automatically generated API documentation.
- Navigate to the Frontend:
  `cd Magic8-Ball/frontend`
- Install dependencies:
  `npm i`
- Run the Frontend:
  `npm run dev`