diff --git a/.gitignore b/.gitignore index 8931b80..64a7bc3 100644 --- a/.gitignore +++ b/.gitignore @@ -22,6 +22,7 @@ var/ *.egg-info/ .installed.cfg *.egg +.github/ # Virtual environments venv/ diff --git a/POSTMAN_GUIDE.md b/POSTMAN_GUIDE.md new file mode 100644 index 0000000..cebb151 --- /dev/null +++ b/POSTMAN_GUIDE.md @@ -0,0 +1,178 @@ +# 📮 Postman Collection - AI Lab API + +## 📋 Overview + +This complete Postman collection contains all the AI Lab API endpoints, with ready-to-use examples for testing each NLP pipeline. + +## 📁 Included Files + +- **`AI_Lab_API.postman_collection.json`** - Main collection with all endpoints +- **`AI_Lab_API.postman_environment.json`** - Environment with configurable variables +- **`POSTMAN_GUIDE.md`** - This user guide + +## 🚀 Installation and Configuration + +### 1. Import into Postman + +1. Open Postman 2. Click **Import** (top-left button) 3. Select **Upload Files** 4. Import both files: + - `AI_Lab_API.postman_collection.json` + - `AI_Lab_API.postman_environment.json` + +### 2. Configure the Environment + +1. Click the **Settings** icon (⚙️) in the top-right corner 2. Select **"AI Lab API Environment"** 3. Change `base_url` if needed (default: `http://localhost:8000`) + +### 3. Start the API + +Before using Postman, make sure the API is running: + +```bash +# From the project directory +python -m src.main --mode api +# or +poetry run python src/main.py --mode api --host 0.0.0.0 --port 8000 +``` + +## 📊 Collection Structure + +### 🏠 Core Endpoints + +- **Root** - General API information +- **Health Check** - API status and loaded pipelines + +### 💭 Sentiment Analysis + +- **Analyze Sentiment - Positive** - Test with positive text +- **Analyze Sentiment - Negative** - Test with negative text +- **Analyze Sentiment - Custom Model** - Test with a custom model +- **Batch Sentiment Analysis** - Batch processing + +### 🏷️ Named Entity Recognition + +- **Extract Entities - People & Organizations** - Person/organization entities +- **Extract Entities - Geographic** - Geographic entities +- **Batch NER Processing** - Batch processing + +### ❓ Question Answering + +- **Simple Q&A** - Simple questions +- **Technical Q&A** - Technical questions + +### 🎭 Fill Mask + +- **Fill Simple Mask** - Simple masks +- **Fill Technical Mask** - Technical masks +- **Batch Fill Mask** - Batch processing + +### 🛡️ Content Moderation + +- **Check Safe Content** - Safe content +- **Check Potentially Toxic Content** - Potentially toxic content +- **Batch Content Moderation** - Batch processing + +### ✍️ Text Generation + +- **Generate Creative Text** - Creative generation +- **Generate Technical Text** - Technical generation +- **Batch Text Generation** - Batch processing + +### 🧪 Testing & Examples + +- **Complete Pipeline Test** - Full pipeline test +- **Error Handling Test - Empty Text** - Error handling (empty text) +- **Error Handling Test - Invalid Model** - Error handling (invalid model) + +## 🔧 Advanced Usage + +### Available environment variables + +| Variable | Description | Default value | | ----------------- | ------------------------- | ----------------------- | | `base_url` | Base URL of the API | `http://localhost:8000` | | `api_version` | API version | `1.0.0` | | `timeout` | Request timeout (ms) | `30000` | | `default_*_model` | Default models | See environment | + +### Customizing models + +You can try different models by changing the `model_name` field in the request body: + +```json { "text": "Your text here", "model_name": "cardiffnlp/twitter-roberta-base-sentiment-latest" } ``` + +### Automatic tests + +Each request includes automatic tests: + +- ✅ Response time < 30 seconds +- ✅ Content-Type header present +- ✅ Automatic logs in the console + +## 📈 Usage Examples + +### 1. Quick API test + +1. Run **"Health Check"** to verify that the API is working 2. Try **"Analyze Sentiment - Positive"** as a first test + +### 2. Full pipeline test + +1. Start with a simple test (e.g., positive sentiment) 2. Test with a custom model 3. Test batch processing 4. Test error handling + +### 3. Performance benchmark + +1. Use **"Batch Text Generation"** with several prompts 2. Monitor response times in the Tests tab 3. Adjust the timeout if needed + +## 🐛 Troubleshooting + +### API not reachable + +- Check that the API is running on the right port +- Change `base_url` in the environment if needed + +### 422 errors (Validation Error) + +- Check the JSON format of the body +- Make sure the required fields are present + +### 503 errors (Service Unavailable) + +- Pipeline not loaded - check the API logs +- Restart the API if needed + +### Timeouts + +- Increase the `timeout` value in the environment +- Some models can be slow on first load + +## 🎯 Best Practices + +1. **Always start with Health Check** to verify the API's state 2. **Use the environment** to centralize configuration 3. **Check the logs** in the Postman console when debugging 4. **Test progressively**: simple → custom → batch → errors 5. **Document your tests** by adding descriptions to the requests + +## 🔗 Useful Links + +- **Swagger documentation**: http://localhost:8000/docs (when the API is running) +- **ReDoc documentation**: http://localhost:8000/redoc +- **OpenAPI schema**: http://localhost:8000/openapi.json + +--- + +**Happy Testing! 🚀** diff --git a/README.md b/README.md index 33292e6..55b2e50 100644 --- a/README.md +++ b/README.md @@ -1,16 +1,54 @@ # 🧠 AI Lab – Transformers CLI Playground -> A **pedagogical and technical project** designed for AI practitioners and students to experiment with Hugging Face Transformers through an **interactive Command‑Line Interface (CLI)**. -> This playground provides ready‑to‑use NLP pipelines (Sentiment Analysis, Named Entity Recognition, Text Generation, Fill‑Mask, Moderation, etc.) in a modular, extensible, and educational codebase. +> A **pedagogical and technical project** designed for AI practitioners and students to explore **Hugging Face Transformers** through an **interactive Command-Line Interface (CLI)** or a **REST API**. +> This playground provides ready-to-use NLP pipelines — including **Sentiment Analysis**, **Named Entity Recognition**, **Text Generation**, **Fill-Mask**, **Question Answering (QA)**, **Moderation**, and more — in a modular, extensible, and educational codebase. + +--- + +

+ Python + Poetry + Transformers + License +

+ +--- + +## 📑 Table of Contents + +- [📚 Overview](#-overview) +- [🗂️ Project Structure](#️-project-structure) +- [⚙️ Installation](#️-installation) + - [🧾 Option 1 – Poetry (Recommended)](#-option-1--poetry-recommended) + - [📦 Option 2 – Pip + Requirements](#-option-2--pip--requirements) +- [▶️ Usage](#️-usage) + - [🖥️ CLI Mode](#️-cli-mode) + - [🌐 API Mode](#-api-mode) +- [📡 API Endpoints](#-api-endpoints) +- [🖥️ CLI Examples](#️-cli-examples) +- [🧠 Architecture Overview](#-architecture-overview) +- [⚙️ Configuration](#️-configuration) +- [🧩 Extending the Playground](#-extending-the-playground) +- [🧰 Troubleshooting](#-troubleshooting) +- [🧭 Development Guidelines](#-development-guidelines) +- [🧱 Roadmap](#-roadmap) +- [📜 License](#-license) --- ## 📚 Overview -The **AI Lab – Transformers CLI Playground** allows you to explore multiple natural language processing tasks directly from the terminal. -Each task (e.g., sentiment, NER, text generation) is implemented as a **Command Module**, which interacts with a **Pipeline Module** built on top of the `transformers` library. +The **AI Lab – Transformers CLI Playground** enables users to explore **multiple NLP tasks directly from the terminal or via HTTP APIs**. +Each task (sentiment, NER, text generation, etc.) is implemented as a **Command Module** that communicates with a **Pipeline Module** powered by Hugging Face’s `transformers` library. -The lab is intentionally structured to demonstrate **clean software design for ML codebases** — with strict separation between configuration, pipelines, CLI logic, and display formatting. +The project demonstrates **clean ML code architecture** with strict separation between: + +- Configuration +- Pipelines +- CLI logic +- Display formatting + +It’s a great educational resource for learning **how to structure ML applications** professionally. 
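The Command ↔ Pipeline separation described above can be sketched in a few lines of Python. This is an illustrative outline only, not the project's actual code — the class names, the lazy model loading, and the `run()` signature are assumptions:

```python
class SentimentPipeline:
    """Pipeline layer: owns the ML model; knows nothing about user I/O."""

    def __init__(self, model_name: str = "distilbert-base-uncased-finetuned-sst-2-english"):
        self.model_name = model_name
        self._pipe = None  # loaded lazily, so importing this module stays cheap

    def analyze(self, text: str) -> dict:
        if self._pipe is None:
            # Heavyweight import kept local: only paid when inference is needed.
            from transformers import pipeline
            self._pipe = pipeline("sentiment-analysis", model=self.model_name)
        # Hugging Face returns a list of results; take the first.
        return self._pipe(text)[0]


class SentimentCommand:
    """Command layer: user-facing wrapper; delegates all ML work to the pipeline."""

    name = "sentiment"

    def __init__(self, pipe: SentimentPipeline):
        self._pipe = pipe

    def run(self, text: str) -> str:
        result = self._pipe.analyze(text)
        return f"→ Sentiment: {result['label']} (score: {result['score']:.3f})"
```

Because the command depends only on the pipeline's `analyze()` method, the same pipeline object can back both a CLI command and a FastAPI endpoint, which is the reuse this layering is meant to enable.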
--- @@ -18,77 +56,74 @@ The lab is intentionally structured to demonstrate **clean software design for M ```text src/ -├── __init__.py ├── main.py # CLI entry point │ ├── cli/ -│ ├── __init__.py -│ ├── base.py # CLICommand base class & interactive shell handler -│ └── display.py # Console formatting utilities (tables, colors, results) +│ ├── base.py # CLICommand base class & interactive shell +│ └── display.py # Console formatting utilities (colors, tables, results) │ ├── commands/ # User-facing commands wrapping pipeline logic -│ ├── __init__.py │ ├── sentiment.py # Sentiment analysis command -│ ├── fillmask.py # Masked token prediction command -│ ├── textgen.py # Text generation command -│ ├── ner.py # Named Entity Recognition command -│ └── moderation.py # Toxicity / content moderation command +│ ├── fillmask.py # Masked token prediction +│ ├── textgen.py # Text generation +│ ├── ner.py # Named Entity Recognition +│ ├── qa.py # Question Answering (extractive) +│ └── moderation.py # Content moderation / toxicity detection │ -├── pipelines/ # Machine learning logic (Hugging Face Transformers) -│ ├── __init__.py +├── pipelines/ # ML logic based on Hugging Face pipelines │ ├── template.py # Blueprint for creating new pipelines │ ├── sentiment.py │ ├── fillmask.py │ ├── textgen.py │ ├── ner.py +│ ├── qa.py │ └── moderation.py │ +├── api/ +│ ├── app.py # FastAPI app and endpoints +│ ├── models.py # Pydantic schemas +│ └── config.py # API configuration +│ └── config/ - ├── __init__.py - └── settings.py # Global configuration (default models, parameters) + └── settings.py # Global configuration (models, params) ``` --- ## ⚙️ Installation -### 🧾 Option 1 – Using Poetry (Recommended) +### 🧾 Option 1 – Poetry (Recommended) -> Poetry is used as the main dependency manager. +> Poetry is the main dependency manager for this project. ```bash -# 1. Create and activate a new virtual environment poetry shell - -# 2. 
Install dependencies poetry install ``` -This will automatically install all dependencies declared in `pyproject.toml`, including **transformers** and **torch**. +This installs all dependencies defined in `pyproject.toml` (including `transformers`, `torch`, and `fastapi`). -To run the CLI inside the Poetry environment: +Run the app: ```bash -poetry run python src/main.py +# CLI mode +poetry run python src/main.py --mode cli + +# API mode +poetry run python src/main.py --mode api ``` --- -### 📦 Option 2 – Using pip and requirements.txt +### 📦 Option 2 – Pip + requirements.txt -If you prefer using `requirements.txt` manually: +If you prefer manual dependency management: ```bash -# 1. Create a virtual environment python -m venv .venv +source .venv/bin/activate # Linux/macOS +.venv\Scripts\Activate.ps1 # Windows -# 2. Activate it -# Linux/macOS -source .venv/bin/activate -# Windows PowerShell -.venv\Scripts\Activate.ps1 - -# 3. Install dependencies pip install -r requirements.txt ``` @@ -96,15 +131,15 @@ pip install -r requirements.txt ## ▶️ Usage -Once installed, launch the CLI with: +### 🖥️ CLI Mode + +Run the interactive CLI: ```bash -python -m src.main -# or, if using Poetry -poetry run python src/main.py +python -m src.main --mode cli ``` -You’ll see an interactive menu listing the available commands: +Interactive menu: ``` Welcome to AI Lab - Transformers CLI Playground @@ -113,36 +148,89 @@ Available commands: • fillmask – Predict masked words in a sentence • textgen – Generate text from a prompt • ner – Extract named entities from text + • qa – Answer questions from a context • moderation – Detect toxic or unsafe content ``` -### Example Sessions +--- -#### 🔹 Sentiment Analysis +### 🌐 API Mode + +Run FastAPI server: + +```bash +python -m src.main --mode api +# Custom config +python -m src.main --mode api --host 0.0.0.0 --port 8000 --reload +``` + +API Docs: + +- **Swagger** → http://localhost:8000/docs +- **ReDoc** → http://localhost:8000/redoc +- **OpenAPI** → 
http://localhost:8000/openapi.json + +--- + +## 📡 API Endpoints + +### Core Endpoints + +| Method | Endpoint | Description | +| ------ | --------- | ------------------------- | +| `GET` | `/` | Health check and API info | +| `GET` | `/health` | Detailed health status | + +### Individual Processing + +| Method | Endpoint | Description | +| ------ | ------------- | ---------------------- | +| `POST` | `/sentiment` | Analyze text sentiment | +| `POST` | `/fillmask` | Predict masked words | +| `POST` | `/textgen` | Generate text | +| `POST` | `/ner` | Extract named entities | +| `POST` | `/qa` | Question answering | +| `POST` | `/moderation` | Content moderation | + +### Batch Processing + +| Method | Endpoint | Description | +| ------ | ------------------- | -------------------------- | +| `POST` | `/sentiment/batch` | Process multiple texts | +| `POST` | `/fillmask/batch` | Fill multiple masked texts | +| `POST` | `/textgen/batch` | Generate from prompts | +| `POST` | `/ner/batch` | Extract entities in batch | +| `POST` | `/qa/batch` | Answer questions in batch | +| `POST` | `/moderation/batch` | Moderate multiple texts | + +--- + +## 🖥️ CLI Examples + +### 🔹 Sentiment Analysis ```text 💬 Enter text: I absolutely love this project! → Sentiment: POSITIVE (score: 0.998) ``` -#### 🔹 Fill‑Mask +### 🔹 Fill-Mask ```text 💬 Enter text: The capital of France is [MASK]. → Predictions: 1) Paris score: 0.87 2) Lyon score: 0.04 - 3) London score: 0.02 ``` -#### 🔹 Text Generation +### 🔹 Text Generation ```text 💬 Prompt: Once upon a time → Output: Once upon a time there was a young AI learning to code... ``` -#### 🔹 NER (Named Entity Recognition) +### 🔹 NER ```text 💬 Enter text: Elon Musk founded SpaceX in California. @@ -152,7 +240,15 @@ Available commands: - California (LOC) ``` -#### 🔹 Moderation +### 🔹 QA (Question Answering) + +```text +💬 Enter question: What is the capital of France? +💬 Enter context: France is a country in Europe. Its capital is Paris. 
+→ Answer: The capital of France is Paris. +``` + +### 🔹 Moderation ```text 💬 Enter text: I hate everything! @@ -163,50 +259,34 @@ Available commands: ## 🧠 Architecture Overview -The internal structure follows a clean **Command ↔ Pipeline ↔ Display** pattern: +Both CLI and API share the **same pipeline layer**, ensuring code reusability and consistency. + +### CLI Architecture ```text - ┌──────────────────────┐ - │ InteractiveCLI │ - │ (src/cli/base.py) │ - └──────────┬───────────┘ - │ - ▼ - ┌─────────────────┐ - │ Command Layer │ ← e.g. sentiment.py - │ (user commands) │ - └───────┬─────────┘ - │ - ▼ - ┌─────────────────┐ - │ Pipeline Layer │ ← e.g. pipelines/sentiment.py - │ (ML logic) │ - └───────┬─────────┘ - │ - ▼ - ┌─────────────────┐ - │ Display Layer │ ← cli/display.py - │ (format output) │ - └─────────────────┘ +InteractiveCLI → Command Layer → Pipeline Layer → Display Layer ``` -### Key Concepts +### API Architecture -| Layer | Description | -| ------------ | -------------------------------------------------------------------------- | -| **CLI** | Manages user input/output, help menus, and navigation between commands. | -| **Command** | Encapsulates a single user-facing operation (e.g., run sentiment). | -| **Pipeline** | Wraps Hugging Face’s `transformers.pipeline()` to perform inference. | -| **Display** | Handles clean console rendering (colored output, tables, JSON formatting). | -| **Config** | Centralizes model names, limits, and global constants. | +```text +FastAPI App → Pydantic Models → Pipeline Layer → JSON Response +``` + +| Layer | Description | +| ------------ | ---------------------------------------------- | +| **CLI** | Manages user input/output and navigation. | +| **API** | Exposes endpoints with automatic OpenAPI docs. | +| **Command** | Encapsulates user-facing operations. | +| **Pipeline** | Wraps Hugging Face’s pipelines. | +| **Models** | Validates requests/responses. | +| **Display** | Formats console output. 
| --- ## ⚙️ Configuration -All configuration is centralized in `src/config/settings.py`. - -Example: +All configuration is centralized in `src/config/settings.py`: ```python class Config: @@ -215,88 +295,75 @@ class Config: "fillmask": "bert-base-uncased", "textgen": "gpt2", "ner": "dslim/bert-base-NER", - "moderation":"unitary/toxic-bert" + "qa": "distilbert-base-cased-distilled-squad", + "moderation":"unitary/toxic-bert", } MAX_LENGTH = 512 BATCH_SIZE = 8 ``` -You can easily modify model names to experiment with different checkpoints. - --- ## 🧩 Extending the Playground -To create a new experiment (e.g., keyword extraction): +To add a new NLP experiment (e.g., keyword extraction): -1. **Duplicate** `src/pipelines/template.py` → `src/pipelines/keywords.py` - Implement the `run()` or `analyze()` logic using a new Hugging Face pipeline. +1. Duplicate `src/pipelines/template.py` → `src/pipelines/keywords.py` +2. Create a command: `src/commands/keywords.py` +3. Register it in `src/main.py` +4. Add Pydantic models and API endpoint +5. Update `Config.DEFAULT_MODELS` -2. **Create a Command** in `src/commands/keywords.py` to interact with users. - -3. **Register the command** inside `src/main.py`: - -```python -from src.commands.keywords import KeywordsCommand -cli.register_command(KeywordsCommand()) -``` - -4. Optionally, add a model name in `Config.DEFAULT_MODELS`. - ---- - -## 🧪 Testing - -You can use `pytest` for lightweight validation: - -```bash -pip install pytest -pytest -q -``` - -Recommended structure: - -``` -tests/ -├── test_sentiment.py -├── test_textgen.py -└── ... -``` +Both CLI and API will automatically share this logic. --- ## 🧰 Troubleshooting -| Issue | Cause / Solution | -| ---------------------------- | -------------------------------------------- | -| **`transformers` not found** | Check virtual environment activation. | -| **Torch fails to install** | Install CPU-only version from PyTorch index. 
| -| **Models download slowly** | Hugging Face caches them after first run. | -| **Unicode / accents broken** | Ensure terminal encoding is UTF‑8. | +| Issue | Solution | +| ------------------------ | ----------------------- | +| `transformers` not found | Activate your venv. | +| Torch install fails | Use CPU-only wheel. | +| Models download slowly | Cached after first use. | +| Encoding issues | Ensure UTF-8 terminal. | + +### API Issues + +| Issue | Solution | +| -------------------- | --------------------------------------- | +| `FastAPI` missing | `pip install fastapi uvicorn[standard]` | +| Port in use | Change with `--port 8001` | +| CORS error | Edit `allow_origins` in `api/config.py` | +| Validation error 422 | Check request body | +| 500 error | Verify model loading | --- ## 🧭 Development Guidelines -- Keep **Command** classes lightweight — no ML logic inside them. -- Reuse the **Pipeline Template** for new experiments. -- Format outputs consistently via the `DisplayFormatter`. -- Document all new models or commands in `README.md` and `settings.py`. +- Keep command classes lightweight (no ML inside) +- Use the pipeline template for new tasks +- Format all outputs via `DisplayFormatter` +- Document new commands and models --- ## 🧱 Roadmap -- [ ] Add non-interactive CLI flags (`--text`, `--task`) -- [ ] Add multilingual model options -- [ ] Add automatic test coverage -- [ ] Add logging and profiling utilities -- [ ] Add export to JSON/CSV results +- [ ] Non-interactive CLI flags (`--text`, `--task`) +- [ ] Multilingual models +- [ ] Test coverage +- [ ] Logging & profiling +- [ ] Export to JSON/CSV --- ## 📜 License -This project is licensed under the [MIT License](./LICENSE) — feel free to use it, modify it, and share it! +Licensed under the [MIT License](./LICENSE). +You are free to use, modify, and distribute this project. 
--- + +✨ **End of Documentation** +_The AI Lab – Transformers CLI Playground: built for learning, experimenting, and sharing NLP excellence._ diff --git a/demo_api.sh b/demo_api.sh new file mode 100755 index 0000000..ba4b817 --- /dev/null +++ b/demo_api.sh @@ -0,0 +1,68 @@ +#!/bin/bash + +echo "🚀 AI Lab API Demo" +echo "==================" +echo "" + +# Configuration +API_BASE="http://127.0.0.1:8000" + +echo "📊 Health Check:" +curl -s -X GET "$API_BASE/health" | python3 -m json.tool +echo "" + +echo "😊 Sentiment Analysis:" +curl -s -X POST "$API_BASE/sentiment" \ + -H "Content-Type: application/json" \ + -d '{"text": "This API is absolutely amazing! I love it!"}' | python3 -m json.tool +echo "" + +echo "🔍 Named Entity Recognition:" +curl -s -X POST "$API_BASE/ner" \ + -H "Content-Type: application/json" \ + -d '{"text": "Apple Inc. was founded by Steve Jobs in Cupertino, California."}' | python3 -m json.tool +echo "" + +echo "❓ Question Answering:" +curl -s -X POST "$API_BASE/qa" \ + -H "Content-Type: application/json" \ + -d '{ + "question": "Who founded Apple?", + "context": "Apple Inc. was founded by Steve Jobs, Steve Wozniak, and Ronald Wayne in April 1976." 
}' | python3 -m json.tool +echo "" + +echo "🎭 Fill Mask:" +curl -s -X POST "$API_BASE/fillmask" \ + -H "Content-Type: application/json" \ + -d '{"text": "The capital of France is [MASK]."}' | python3 -m json.tool +echo "" + +echo "🛡️ Content Moderation:" +curl -s -X POST "$API_BASE/moderation" \ + -H "Content-Type: application/json" \ + -d '{"text": "This is a completely normal and safe text."}' | python3 -m json.tool +echo "" + +echo "✍️ Text Generation:" +curl -s -X POST "$API_BASE/textgen" \ + -H "Content-Type: application/json" \ + -d '{"text": "Once upon a time, in a distant galaxy"}' | python3 -m json.tool +echo "" + +echo "📦 Batch Sentiment Analysis:" +curl -s -X POST "$API_BASE/sentiment/batch" \ + -H "Content-Type: application/json" \ + -d '{ + "texts": [ + "I love this!", + "This is terrible.", + "Neutral statement here." + ] + }' | python3 -m json.tool +echo "" + +echo "🏥 Final Health Check:" +curl -s -X GET "$API_BASE/health" | python3 -m json.tool +echo "" +echo "✅ Demo completed! 
Check the API documentation at: $API_BASE/docs" \ No newline at end of file diff --git a/poetry.lock b/poetry.lock index b7304dc..5014372 100644 --- a/poetry.lock +++ b/poetry.lock @@ -2,36 +2,65 @@ [[package]] name = "accelerate" -version = "1.10.1" +version = "0.20.3" description = "Accelerate" optional = false -python-versions = ">=3.9.0" +python-versions = ">=3.7.0" groups = ["main"] files = [ - {file = "accelerate-1.10.1-py3-none-any.whl", hash = "sha256:3621cff60b9a27ce798857ece05e2b9f56fcc71631cfb31ccf71f0359c311f11"}, - {file = "accelerate-1.10.1.tar.gz", hash = "sha256:3dea89e433420e4bfac0369cae7e36dcd6a56adfcfd38cdda145c6225eab5df8"}, + {file = "accelerate-0.20.3-py3-none-any.whl", hash = "sha256:147183e7a2215f7bd45a7af3b986a963daa8a61fa58b0912b9473049e011ad15"}, + {file = "accelerate-0.20.3.tar.gz", hash = "sha256:79a896978c20dac270083d42bf033f4c9a80dcdd6b946f1ca92d8d6d0f0f5ba9"}, ] [package.dependencies] -huggingface_hub = ">=0.21.0" -numpy = ">=1.17,<3.0.0" +numpy = ">=1.17" packaging = ">=20.0" psutil = "*" pyyaml = "*" -safetensors = ">=0.4.3" -torch = ">=2.0.0" +torch = ">=1.6.0" [package.extras] -deepspeed = ["deepspeed"] -dev = ["bitsandbytes", "black (>=23.1,<24.0)", "datasets", "diffusers", "evaluate", "hf-doc-builder (>=0.3.0)", "parameterized", "pytest (>=7.2.0,<=8.0.0)", "pytest-order", "pytest-subtests", "pytest-xdist", "rich", "ruff (>=0.11.2,<0.12.0)", "scikit-learn", "scipy", "timm", "torchdata (>=0.8.0)", "torchpippy (>=0.2.0)", "tqdm", "transformers"] -quality = ["black (>=23.1,<24.0)", "hf-doc-builder (>=0.3.0)", "ruff (>=0.11.2,<0.12.0)"] +dev = ["black (>=23.1,<24.0)", "datasets", "deepspeed", "evaluate", "hf-doc-builder (>=0.3.0)", "parameterized", "pytest", "pytest-subtests", "pytest-xdist", "rich", "ruff (>=0.0.241)", "scikit-learn", "scipy", "tqdm", "transformers", "urllib3 (<2.0.0)"] +quality = ["black (>=23.1,<24.0)", "hf-doc-builder (>=0.3.0)", "ruff (>=0.0.241)", "urllib3 (<2.0.0)"] rich = ["rich"] sagemaker = ["sagemaker"] 
-test-dev = ["bitsandbytes", "datasets", "diffusers", "evaluate", "scikit-learn", "scipy", "timm", "torchdata (>=0.8.0)", "torchpippy (>=0.2.0)", "tqdm", "transformers"] -test-fp8 = ["torchao"] -test-prod = ["parameterized", "pytest (>=7.2.0,<=8.0.0)", "pytest-order", "pytest-subtests", "pytest-xdist"] -test-trackers = ["comet-ml", "dvclive", "matplotlib", "mlflow", "swanlab", "tensorboard", "trackio", "wandb"] -testing = ["bitsandbytes", "datasets", "diffusers", "evaluate", "parameterized", "pytest (>=7.2.0,<=8.0.0)", "pytest-order", "pytest-subtests", "pytest-xdist", "scikit-learn", "scipy", "timm", "torchdata (>=0.8.0)", "torchpippy (>=0.2.0)", "tqdm", "transformers"] +test-dev = ["datasets", "deepspeed", "evaluate", "scikit-learn", "scipy", "tqdm", "transformers"] +test-prod = ["parameterized", "pytest", "pytest-subtests", "pytest-xdist"] +test-trackers = ["comet-ml", "tensorboard", "wandb"] +testing = ["datasets", "deepspeed", "evaluate", "parameterized", "pytest", "pytest-subtests", "pytest-xdist", "scikit-learn", "scipy", "tqdm", "transformers"] + +[[package]] +name = "annotated-types" +version = "0.7.0" +description = "Reusable constraint types to use with typing.Annotated" +optional = false +python-versions = ">=3.8" +groups = ["main"] +files = [ + {file = "annotated_types-0.7.0-py3-none-any.whl", hash = "sha256:1f02e8b43a8fbbc3f3e0d4f0f4bfc8131bcb4eebe8849b8e5c773f3a1c582a53"}, + {file = "annotated_types-0.7.0.tar.gz", hash = "sha256:aff07c09a53a08bc8cfccb9c85b05f1aa9a2a6f23728d790723543408344ce89"}, +] + +[[package]] +name = "anyio" +version = "3.7.1" +description = "High level compatibility layer for multiple asynchronous event loop implementations" +optional = false +python-versions = ">=3.7" +groups = ["main"] +files = [ + {file = "anyio-3.7.1-py3-none-any.whl", hash = "sha256:91dee416e570e92c64041bd18b900d1d6fa78dff7048769ce5ac5ddad004fbb5"}, + {file = "anyio-3.7.1.tar.gz", hash = 
"sha256:44a3c9aba0f5defa43261a8b3efb97891f2bd7d804e0e1f56419befa1adfc780"}, +] + +[package.dependencies] +idna = ">=2.8" +sniffio = ">=1.1" + +[package.extras] +doc = ["Sphinx", "packaging", "sphinx-autodoc-typehints (>=1.2.0)", "sphinx-rtd-theme (>=1.2.2)", "sphinxcontrib-jquery"] +test = ["anyio[trio]", "coverage[toml] (>=4.5)", "hypothesis (>=4.0)", "mock (>=4) ; python_version < \"3.8\"", "psutil (>=5.9)", "pytest (>=7.0)", "pytest-mock (>=3.6.1)", "trustme", "uvloop (>=0.17) ; python_version < \"3.12\" and platform_python_implementation == \"CPython\" and platform_system != \"Windows\""] +trio = ["trio (<0.22)"] [[package]] name = "certifi" @@ -134,6 +163,21 @@ files = [ {file = "charset_normalizer-3.4.3.tar.gz", hash = "sha256:6fce4b8500244f6fcb71465d4a4930d132ba9ab8e71a7859e6a5d59851068d14"}, ] +[[package]] +name = "click" +version = "8.3.0" +description = "Composable command line interface toolkit" +optional = false +python-versions = ">=3.10" +groups = ["main"] +files = [ + {file = "click-8.3.0-py3-none-any.whl", hash = "sha256:9b9f285302c6e3064f4330c05f05b81945b2a39544279343e6e7c5f27a9baddc"}, + {file = "click-8.3.0.tar.gz", hash = "sha256:e7b8232224eba16f4ebe410c25ced9f7875cb5f3263ffc93cc3e8da705e229c4"}, +] + +[package.dependencies] +colorama = {version = "*", markers = "platform_system == \"Windows\""} + [[package]] name = "colorama" version = "0.4.6" @@ -141,12 +185,33 @@ description = "Cross-platform colored terminal text." 
optional = false python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*,>=2.7" groups = ["main"] -markers = "platform_system == \"Windows\"" +markers = "platform_system == \"Windows\" or sys_platform == \"win32\"" files = [ {file = "colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6"}, {file = "colorama-0.4.6.tar.gz", hash = "sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44"}, ] +[[package]] +name = "fastapi" +version = "0.104.1" +description = "FastAPI framework, high performance, easy to learn, fast to code, ready for production" +optional = false +python-versions = ">=3.8" +groups = ["main"] +files = [ + {file = "fastapi-0.104.1-py3-none-any.whl", hash = "sha256:752dc31160cdbd0436bb93bad51560b57e525cbb1d4bbf6f4904ceee75548241"}, + {file = "fastapi-0.104.1.tar.gz", hash = "sha256:e5e4540a7c5e1dcfbbcf5b903c234feddcdcd881f191977a1c5dfd917487e7ae"}, +] + +[package.dependencies] +anyio = ">=3.7.1,<4.0.0" +pydantic = ">=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<2.0.0 || >2.0.0,<2.0.1 || >2.0.1,<2.1.0 || >2.1.0,<3.0.0" +starlette = ">=0.27.0,<0.28.0" +typing-extensions = ">=4.8.0" + +[package.extras] +all = ["email-validator (>=2.0.0)", "httpx (>=0.23.0)", "itsdangerous (>=1.1.0)", "jinja2 (>=2.11.2)", "orjson (>=3.2.1)", "pydantic-extra-types (>=2.0.0)", "pydantic-settings (>=2.0.0)", "python-multipart (>=0.0.5)", "pyyaml (>=5.3.1)", "ujson (>=4.0.1,!=4.0.2,!=4.1.0,!=4.2.0,!=4.3.0,!=5.0.0,!=5.1.0)", "uvicorn[standard] (>=0.12.0)"] + [[package]] name = "filelock" version = "3.19.1" @@ -199,6 +264,18 @@ test-downstream = ["aiobotocore (>=2.5.4,<3.0.0)", "dask[dataframe,test]", "moto test-full = ["adlfs", "aiohttp (!=4.0.0a0,!=4.0.0a1)", "cloudpickle", "dask", "distributed", "dropbox", "dropboxdrivefs", "fastparquet", "fusepy", "gcsfs", "jinja2", "kerchunk", "libarchive-c", "lz4", "notebook", "numpy", "ocifs", "pandas", "panel", "paramiko", "pyarrow", "pyarrow (>=1)", 
"pyftpdlib", "pygit2", "pytest", "pytest-asyncio (!=0.22.0)", "pytest-benchmark", "pytest-cov", "pytest-mock", "pytest-recording", "pytest-rerunfailures", "python-snappy", "requests", "smbprotocol", "tqdm", "urllib3", "zarr", "zstandard ; python_version < \"3.14\""] tqdm = ["tqdm"] +[[package]] +name = "h11" +version = "0.16.0" +description = "A pure-Python, bring-your-own-I/O implementation of HTTP/1.1" +optional = false +python-versions = ">=3.8" +groups = ["main"] +files = [ + {file = "h11-0.16.0-py3-none-any.whl", hash = "sha256:63cf8bbe7522de3bf65932fda1d9c2772064ffb3dae62d55932da54b31cb6c86"}, + {file = "h11-0.16.0.tar.gz", hash = "sha256:4e35b956cf45792e4caa5885e69fba00bdbc6ffafbfa020300e549b208ee5ff1"}, +] + [[package]] name = "hf-xet" version = "1.1.10" @@ -221,6 +298,59 @@ files = [ [package.extras] tests = ["pytest"] +[[package]] +name = "httptools" +version = "0.7.1" +description = "A collection of framework independent HTTP protocol utils." +optional = false +python-versions = ">=3.9" +groups = ["main"] +files = [ + {file = "httptools-0.7.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:11d01b0ff1fe02c4c32d60af61a4d613b74fad069e47e06e9067758c01e9ac78"}, + {file = "httptools-0.7.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:84d86c1e5afdc479a6fdabf570be0d3eb791df0ae727e8dbc0259ed1249998d4"}, + {file = "httptools-0.7.1-cp310-cp310-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:c8c751014e13d88d2be5f5f14fc8b89612fcfa92a9cc480f2bc1598357a23a05"}, + {file = "httptools-0.7.1-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:654968cb6b6c77e37b832a9be3d3ecabb243bbe7a0b8f65fbc5b6b04c8fcabed"}, + {file = "httptools-0.7.1-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:b580968316348b474b020edf3988eecd5d6eec4634ee6561e72ae3a2a0e00a8a"}, + {file = "httptools-0.7.1-cp310-cp310-musllinux_1_2_x86_64.whl", hash = 
"sha256:d496e2f5245319da9d764296e86c5bb6fcf0cf7a8806d3d000717a889c8c0b7b"}, + {file = "httptools-0.7.1-cp310-cp310-win_amd64.whl", hash = "sha256:cbf8317bfccf0fed3b5680c559d3459cccf1abe9039bfa159e62e391c7270568"}, + {file = "httptools-0.7.1-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:474d3b7ab469fefcca3697a10d11a32ee2b9573250206ba1e50d5980910da657"}, + {file = "httptools-0.7.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:a3c3b7366bb6c7b96bd72d0dbe7f7d5eead261361f013be5f6d9590465ea1c70"}, + {file = "httptools-0.7.1-cp311-cp311-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:379b479408b8747f47f3b253326183d7c009a3936518cdb70db58cffd369d9df"}, + {file = "httptools-0.7.1-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:cad6b591a682dcc6cf1397c3900527f9affef1e55a06c4547264796bbd17cf5e"}, + {file = "httptools-0.7.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:eb844698d11433d2139bbeeb56499102143beb582bd6c194e3ba69c22f25c274"}, + {file = "httptools-0.7.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:f65744d7a8bdb4bda5e1fa23e4ba16832860606fcc09d674d56e425e991539ec"}, + {file = "httptools-0.7.1-cp311-cp311-win_amd64.whl", hash = "sha256:135fbe974b3718eada677229312e97f3b31f8a9c8ffa3ae6f565bf808d5b6bcb"}, + {file = "httptools-0.7.1-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:38e0c83a2ea9746ebbd643bdfb521b9aa4a91703e2cd705c20443405d2fd16a5"}, + {file = "httptools-0.7.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:f25bbaf1235e27704f1a7b86cd3304eabc04f569c828101d94a0e605ef7205a5"}, + {file = "httptools-0.7.1-cp312-cp312-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:2c15f37ef679ab9ecc06bfc4e6e8628c32a8e4b305459de7cf6785acd57e4d03"}, + {file = "httptools-0.7.1-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = 
"sha256:7fe6e96090df46b36ccfaf746f03034e5ab723162bc51b0a4cf58305324036f2"}, + {file = "httptools-0.7.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:f72fdbae2dbc6e68b8239defb48e6a5937b12218e6ffc2c7846cc37befa84362"}, + {file = "httptools-0.7.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:e99c7b90a29fd82fea9ef57943d501a16f3404d7b9ee81799d41639bdaae412c"}, + {file = "httptools-0.7.1-cp312-cp312-win_amd64.whl", hash = "sha256:3e14f530fefa7499334a79b0cf7e7cd2992870eb893526fb097d51b4f2d0f321"}, + {file = "httptools-0.7.1-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:6babce6cfa2a99545c60bfef8bee0cc0545413cb0018f617c8059a30ad985de3"}, + {file = "httptools-0.7.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:601b7628de7504077dd3dcb3791c6b8694bbd967148a6d1f01806509254fb1ca"}, + {file = "httptools-0.7.1-cp313-cp313-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:04c6c0e6c5fb0739c5b8a9eb046d298650a0ff38cf42537fc372b28dc7e4472c"}, + {file = "httptools-0.7.1-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:69d4f9705c405ae3ee83d6a12283dc9feba8cc6aaec671b412917e644ab4fa66"}, + {file = "httptools-0.7.1-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:44c8f4347d4b31269c8a9205d8a5ee2df5322b09bbbd30f8f862185bb6b05346"}, + {file = "httptools-0.7.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:465275d76db4d554918aba40bf1cbebe324670f3dfc979eaffaa5d108e2ed650"}, + {file = "httptools-0.7.1-cp313-cp313-win_amd64.whl", hash = "sha256:322d00c2068d125bd570f7bf78b2d367dad02b919d8581d7476d8b75b294e3e6"}, + {file = "httptools-0.7.1-cp314-cp314-macosx_10_13_universal2.whl", hash = "sha256:c08fe65728b8d70b6923ce31e3956f859d5e1e8548e6f22ec520a962c6757270"}, + {file = "httptools-0.7.1-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:7aea2e3c3953521c3c51106ee11487a910d45586e351202474d45472db7d72d3"}, + {file = 
"httptools-0.7.1-cp314-cp314-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:0e68b8582f4ea9166be62926077a3334064d422cf08ab87d8b74664f8e9058e1"}, + {file = "httptools-0.7.1-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:df091cf961a3be783d6aebae963cc9b71e00d57fa6f149025075217bc6a55a7b"}, + {file = "httptools-0.7.1-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:f084813239e1eb403ddacd06a30de3d3e09a9b76e7894dcda2b22f8a726e9c60"}, + {file = "httptools-0.7.1-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:7347714368fb2b335e9063bc2b96f2f87a9ceffcd9758ac295f8bbcd3ffbc0ca"}, + {file = "httptools-0.7.1-cp314-cp314-win_amd64.whl", hash = "sha256:cfabda2a5bb85aa2a904ce06d974a3f30fb36cc63d7feaddec05d2050acede96"}, + {file = "httptools-0.7.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:ac50afa68945df63ec7a2707c506bd02239272288add34539a2ef527254626a4"}, + {file = "httptools-0.7.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:de987bb4e7ac95b99b805b99e0aae0ad51ae61df4263459d36e07cf4052d8b3a"}, + {file = "httptools-0.7.1-cp39-cp39-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:d169162803a24425eb5e4d51d79cbf429fd7a491b9e570a55f495ea55b26f0bf"}, + {file = "httptools-0.7.1-cp39-cp39-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:49794f9250188a57fa73c706b46cb21a313edb00d337ca4ce1a011fe3c760b28"}, + {file = "httptools-0.7.1-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:aeefa0648362bb97a7d6b5ff770bfb774930a327d7f65f8208394856862de517"}, + {file = "httptools-0.7.1-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:0d92b10dbf0b3da4823cde6a96d18e6ae358a9daa741c71448975f6a2c339cad"}, + {file = "httptools-0.7.1-cp39-cp39-win_amd64.whl", hash = "sha256:5ddbd045cfcb073db2449563dd479057f2c2b681ebc232380e63ef15edc9c023"}, + {file = "httptools-0.7.1.tar.gz", hash = 
"sha256:abd72556974f8e7c74a259655924a717a2365b236c882c3f6f8a45fe94703ac9"}, +] + [[package]] name = "huggingface-hub" version = "0.35.1" @@ -293,18 +423,6 @@ MarkupSafe = ">=2.0" [package.extras] i18n = ["Babel (>=2.7)"] -[[package]] -name = "joblib" -version = "1.5.2" -description = "Lightweight pipelining with Python functions" -optional = false -python-versions = ">=3.9" -groups = ["main"] -files = [ - {file = "joblib-1.5.2-py3-none-any.whl", hash = "sha256:4e1f0bdbb987e6d843c70cf43714cb276623def372df3c22fe5266b2670bc241"}, - {file = "joblib-1.5.2.tar.gz", hash = "sha256:3faa5c39054b2f03ca547da9b2f52fde67c06240c31853f306aea97f13647b55"}, -] - [[package]] name = "markupsafe" version = "3.0.2" @@ -417,86 +535,48 @@ test-extras = ["pytest-mpl", "pytest-randomly"] [[package]] name = "numpy" -version = "2.3.3" +version = "1.26.4" description = "Fundamental package for array computing in Python" optional = false -python-versions = ">=3.11" +python-versions = ">=3.9" groups = ["main"] files = [ - {file = "numpy-2.3.3-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:0ffc4f5caba7dfcbe944ed674b7eef683c7e94874046454bb79ed7ee0236f59d"}, - {file = "numpy-2.3.3-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:e7e946c7170858a0295f79a60214424caac2ffdb0063d4d79cb681f9aa0aa569"}, - {file = "numpy-2.3.3-cp311-cp311-macosx_14_0_arm64.whl", hash = "sha256:cd4260f64bc794c3390a63bf0728220dd1a68170c169088a1e0dfa2fde1be12f"}, - {file = "numpy-2.3.3-cp311-cp311-macosx_14_0_x86_64.whl", hash = "sha256:f0ddb4b96a87b6728df9362135e764eac3cfa674499943ebc44ce96c478ab125"}, - {file = "numpy-2.3.3-cp311-cp311-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:afd07d377f478344ec6ca2b8d4ca08ae8bd44706763d1efb56397de606393f48"}, - {file = "numpy-2.3.3-cp311-cp311-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:bc92a5dedcc53857249ca51ef29f5e5f2f8c513e22cfb90faeb20343b8c6f7a6"}, - {file = "numpy-2.3.3-cp311-cp311-musllinux_1_2_aarch64.whl", hash = 
"sha256:7af05ed4dc19f308e1d9fc759f36f21921eb7bbfc82843eeec6b2a2863a0aefa"}, - {file = "numpy-2.3.3-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:433bf137e338677cebdd5beac0199ac84712ad9d630b74eceeb759eaa45ddf30"}, - {file = "numpy-2.3.3-cp311-cp311-win32.whl", hash = "sha256:eb63d443d7b4ffd1e873f8155260d7f58e7e4b095961b01c91062935c2491e57"}, - {file = "numpy-2.3.3-cp311-cp311-win_amd64.whl", hash = "sha256:ec9d249840f6a565f58d8f913bccac2444235025bbb13e9a4681783572ee3caa"}, - {file = "numpy-2.3.3-cp311-cp311-win_arm64.whl", hash = "sha256:74c2a948d02f88c11a3c075d9733f1ae67d97c6bdb97f2bb542f980458b257e7"}, - {file = "numpy-2.3.3-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:cfdd09f9c84a1a934cde1eec2267f0a43a7cd44b2cca4ff95b7c0d14d144b0bf"}, - {file = "numpy-2.3.3-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:cb32e3cf0f762aee47ad1ddc6672988f7f27045b0783c887190545baba73aa25"}, - {file = "numpy-2.3.3-cp312-cp312-macosx_14_0_arm64.whl", hash = "sha256:396b254daeb0a57b1fe0ecb5e3cff6fa79a380fa97c8f7781a6d08cd429418fe"}, - {file = "numpy-2.3.3-cp312-cp312-macosx_14_0_x86_64.whl", hash = "sha256:067e3d7159a5d8f8a0b46ee11148fc35ca9b21f61e3c49fbd0a027450e65a33b"}, - {file = "numpy-2.3.3-cp312-cp312-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:1c02d0629d25d426585fb2e45a66154081b9fa677bc92a881ff1d216bc9919a8"}, - {file = "numpy-2.3.3-cp312-cp312-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:d9192da52b9745f7f0766531dcfa978b7763916f158bb63bdb8a1eca0068ab20"}, - {file = "numpy-2.3.3-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:cd7de500a5b66319db419dc3c345244404a164beae0d0937283b907d8152e6ea"}, - {file = "numpy-2.3.3-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:93d4962d8f82af58f0b2eb85daaf1b3ca23fe0a85d0be8f1f2b7bb46034e56d7"}, - {file = "numpy-2.3.3-cp312-cp312-win32.whl", hash = "sha256:5534ed6b92f9b7dca6c0a19d6df12d41c68b991cef051d108f6dbff3babc4ebf"}, - {file = "numpy-2.3.3-cp312-cp312-win_amd64.whl", 
hash = "sha256:497d7cad08e7092dba36e3d296fe4c97708c93daf26643a1ae4b03f6294d30eb"}, - {file = "numpy-2.3.3-cp312-cp312-win_arm64.whl", hash = "sha256:ca0309a18d4dfea6fc6262a66d06c26cfe4640c3926ceec90e57791a82b6eee5"}, - {file = "numpy-2.3.3-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:f5415fb78995644253370985342cd03572ef8620b934da27d77377a2285955bf"}, - {file = "numpy-2.3.3-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:d00de139a3324e26ed5b95870ce63be7ec7352171bc69a4cf1f157a48e3eb6b7"}, - {file = "numpy-2.3.3-cp313-cp313-macosx_14_0_arm64.whl", hash = "sha256:9dc13c6a5829610cc07422bc74d3ac083bd8323f14e2827d992f9e52e22cd6a6"}, - {file = "numpy-2.3.3-cp313-cp313-macosx_14_0_x86_64.whl", hash = "sha256:d79715d95f1894771eb4e60fb23f065663b2298f7d22945d66877aadf33d00c7"}, - {file = "numpy-2.3.3-cp313-cp313-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:952cfd0748514ea7c3afc729a0fc639e61655ce4c55ab9acfab14bda4f402b4c"}, - {file = "numpy-2.3.3-cp313-cp313-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:5b83648633d46f77039c29078751f80da65aa64d5622a3cd62aaef9d835b6c93"}, - {file = "numpy-2.3.3-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:b001bae8cea1c7dfdb2ae2b017ed0a6f2102d7a70059df1e338e307a4c78a8ae"}, - {file = "numpy-2.3.3-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:8e9aced64054739037d42fb84c54dd38b81ee238816c948c8f3ed134665dcd86"}, - {file = "numpy-2.3.3-cp313-cp313-win32.whl", hash = "sha256:9591e1221db3f37751e6442850429b3aabf7026d3b05542d102944ca7f00c8a8"}, - {file = "numpy-2.3.3-cp313-cp313-win_amd64.whl", hash = "sha256:f0dadeb302887f07431910f67a14d57209ed91130be0adea2f9793f1a4f817cf"}, - {file = "numpy-2.3.3-cp313-cp313-win_arm64.whl", hash = "sha256:3c7cf302ac6e0b76a64c4aecf1a09e51abd9b01fc7feee80f6c43e3ab1b1dbc5"}, - {file = "numpy-2.3.3-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:eda59e44957d272846bb407aad19f89dc6f58fecf3504bd144f4c5cf81a7eacc"}, - {file = 
"numpy-2.3.3-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:823d04112bc85ef5c4fda73ba24e6096c8f869931405a80aa8b0e604510a26bc"}, - {file = "numpy-2.3.3-cp313-cp313t-macosx_14_0_arm64.whl", hash = "sha256:40051003e03db4041aa325da2a0971ba41cf65714e65d296397cc0e32de6018b"}, - {file = "numpy-2.3.3-cp313-cp313t-macosx_14_0_x86_64.whl", hash = "sha256:6ee9086235dd6ab7ae75aba5662f582a81ced49f0f1c6de4260a78d8f2d91a19"}, - {file = "numpy-2.3.3-cp313-cp313t-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:94fcaa68757c3e2e668ddadeaa86ab05499a70725811e582b6a9858dd472fb30"}, - {file = "numpy-2.3.3-cp313-cp313t-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:da1a74b90e7483d6ce5244053399a614b1d6b7bc30a60d2f570e5071f8959d3e"}, - {file = "numpy-2.3.3-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:2990adf06d1ecee3b3dcbb4977dfab6e9f09807598d647f04d385d29e7a3c3d3"}, - {file = "numpy-2.3.3-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:ed635ff692483b8e3f0fcaa8e7eb8a75ee71aa6d975388224f70821421800cea"}, - {file = "numpy-2.3.3-cp313-cp313t-win32.whl", hash = "sha256:a333b4ed33d8dc2b373cc955ca57babc00cd6f9009991d9edc5ddbc1bac36bcd"}, - {file = "numpy-2.3.3-cp313-cp313t-win_amd64.whl", hash = "sha256:4384a169c4d8f97195980815d6fcad04933a7e1ab3b530921c3fef7a1c63426d"}, - {file = "numpy-2.3.3-cp313-cp313t-win_arm64.whl", hash = "sha256:75370986cc0bc66f4ce5110ad35aae6d182cc4ce6433c40ad151f53690130bf1"}, - {file = "numpy-2.3.3-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:cd052f1fa6a78dee696b58a914b7229ecfa41f0a6d96dc663c1220a55e137593"}, - {file = "numpy-2.3.3-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:414a97499480067d305fcac9716c29cf4d0d76db6ebf0bf3cbce666677f12652"}, - {file = "numpy-2.3.3-cp314-cp314-macosx_14_0_arm64.whl", hash = "sha256:50a5fe69f135f88a2be9b6ca0481a68a136f6febe1916e4920e12f1a34e708a7"}, - {file = "numpy-2.3.3-cp314-cp314-macosx_14_0_x86_64.whl", hash = 
"sha256:b912f2ed2b67a129e6a601e9d93d4fa37bef67e54cac442a2f588a54afe5c67a"}, - {file = "numpy-2.3.3-cp314-cp314-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:9e318ee0596d76d4cb3d78535dc005fa60e5ea348cd131a51e99d0bdbe0b54fe"}, - {file = "numpy-2.3.3-cp314-cp314-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:ce020080e4a52426202bdb6f7691c65bb55e49f261f31a8f506c9f6bc7450421"}, - {file = "numpy-2.3.3-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:e6687dc183aa55dae4a705b35f9c0f8cb178bcaa2f029b241ac5356221d5c021"}, - {file = "numpy-2.3.3-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:d8f3b1080782469fdc1718c4ed1d22549b5fb12af0d57d35e992158a772a37cf"}, - {file = "numpy-2.3.3-cp314-cp314-win32.whl", hash = "sha256:cb248499b0bc3be66ebd6578b83e5acacf1d6cb2a77f2248ce0e40fbec5a76d0"}, - {file = "numpy-2.3.3-cp314-cp314-win_amd64.whl", hash = "sha256:691808c2b26b0f002a032c73255d0bd89751425f379f7bcd22d140db593a96e8"}, - {file = "numpy-2.3.3-cp314-cp314-win_arm64.whl", hash = "sha256:9ad12e976ca7b10f1774b03615a2a4bab8addce37ecc77394d8e986927dc0dfe"}, - {file = "numpy-2.3.3-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:9cc48e09feb11e1db00b320e9d30a4151f7369afb96bd0e48d942d09da3a0d00"}, - {file = "numpy-2.3.3-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:901bf6123879b7f251d3631967fd574690734236075082078e0571977c6a8e6a"}, - {file = "numpy-2.3.3-cp314-cp314t-macosx_14_0_arm64.whl", hash = "sha256:7f025652034199c301049296b59fa7d52c7e625017cae4c75d8662e377bf487d"}, - {file = "numpy-2.3.3-cp314-cp314t-macosx_14_0_x86_64.whl", hash = "sha256:533ca5f6d325c80b6007d4d7fb1984c303553534191024ec6a524a4c92a5935a"}, - {file = "numpy-2.3.3-cp314-cp314t-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:0edd58682a399824633b66885d699d7de982800053acf20be1eaa46d92009c54"}, - {file = "numpy-2.3.3-cp314-cp314t-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = 
"sha256:367ad5d8fbec5d9296d18478804a530f1191e24ab4d75ab408346ae88045d25e"}, - {file = "numpy-2.3.3-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:8f6ac61a217437946a1fa48d24c47c91a0c4f725237871117dea264982128097"}, - {file = "numpy-2.3.3-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:179a42101b845a816d464b6fe9a845dfaf308fdfc7925387195570789bb2c970"}, - {file = "numpy-2.3.3-cp314-cp314t-win32.whl", hash = "sha256:1250c5d3d2562ec4174bce2e3a1523041595f9b651065e4a4473f5f48a6bc8a5"}, - {file = "numpy-2.3.3-cp314-cp314t-win_amd64.whl", hash = "sha256:b37a0b2e5935409daebe82c1e42274d30d9dd355852529eab91dab8dcca7419f"}, - {file = "numpy-2.3.3-cp314-cp314t-win_arm64.whl", hash = "sha256:78c9f6560dc7e6b3990e32df7ea1a50bbd0e2a111e05209963f5ddcab7073b0b"}, - {file = "numpy-2.3.3-pp311-pypy311_pp73-macosx_10_15_x86_64.whl", hash = "sha256:1e02c7159791cd481e1e6d5ddd766b62a4d5acf8df4d4d1afe35ee9c5c33a41e"}, - {file = "numpy-2.3.3-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:dca2d0fc80b3893ae72197b39f69d55a3cd8b17ea1b50aa4c62de82419936150"}, - {file = "numpy-2.3.3-pp311-pypy311_pp73-macosx_14_0_arm64.whl", hash = "sha256:99683cbe0658f8271b333a1b1b4bb3173750ad59c0c61f5bbdc5b318918fffe3"}, - {file = "numpy-2.3.3-pp311-pypy311_pp73-macosx_14_0_x86_64.whl", hash = "sha256:d9d537a39cc9de668e5cd0e25affb17aec17b577c6b3ae8a3d866b479fbe88d0"}, - {file = "numpy-2.3.3-pp311-pypy311_pp73-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:8596ba2f8af5f93b01d97563832686d20206d303024777f6dfc2e7c7c3f1850e"}, - {file = "numpy-2.3.3-pp311-pypy311_pp73-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:e1ec5615b05369925bd1125f27df33f3b6c8bc10d788d5999ecd8769a1fa04db"}, - {file = "numpy-2.3.3-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:2e267c7da5bf7309670523896df97f93f6e469fb931161f483cd6882b3b1a5dc"}, - {file = "numpy-2.3.3.tar.gz", hash = "sha256:ddc7c39727ba62b80dfdbedf400d1c10ddfa8eefbd7ec8dcb118be8b56d31029"}, + {file = 
"numpy-1.26.4-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:9ff0f4f29c51e2803569d7a51c2304de5554655a60c5d776e35b4a41413830d0"}, + {file = "numpy-1.26.4-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:2e4ee3380d6de9c9ec04745830fd9e2eccb3e6cf790d39d7b98ffd19b0dd754a"}, + {file = "numpy-1.26.4-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d209d8969599b27ad20994c8e41936ee0964e6da07478d6c35016bc386b66ad4"}, + {file = "numpy-1.26.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ffa75af20b44f8dba823498024771d5ac50620e6915abac414251bd971b4529f"}, + {file = "numpy-1.26.4-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:62b8e4b1e28009ef2846b4c7852046736bab361f7aeadeb6a5b89ebec3c7055a"}, + {file = "numpy-1.26.4-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:a4abb4f9001ad2858e7ac189089c42178fcce737e4169dc61321660f1a96c7d2"}, + {file = "numpy-1.26.4-cp310-cp310-win32.whl", hash = "sha256:bfe25acf8b437eb2a8b2d49d443800a5f18508cd811fea3181723922a8a82b07"}, + {file = "numpy-1.26.4-cp310-cp310-win_amd64.whl", hash = "sha256:b97fe8060236edf3662adfc2c633f56a08ae30560c56310562cb4f95500022d5"}, + {file = "numpy-1.26.4-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:4c66707fabe114439db9068ee468c26bbdf909cac0fb58686a42a24de1760c71"}, + {file = "numpy-1.26.4-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:edd8b5fe47dab091176d21bb6de568acdd906d1887a4584a15a9a96a1dca06ef"}, + {file = "numpy-1.26.4-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7ab55401287bfec946ced39700c053796e7cc0e3acbef09993a9ad2adba6ca6e"}, + {file = "numpy-1.26.4-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:666dbfb6ec68962c033a450943ded891bed2d54e6755e35e5835d63f4f6931d5"}, + {file = "numpy-1.26.4-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:96ff0b2ad353d8f990b63294c8986f1ec3cb19d749234014f4e7eb0112ceba5a"}, + {file = 
"numpy-1.26.4-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:60dedbb91afcbfdc9bc0b1f3f402804070deed7392c23eb7a7f07fa857868e8a"}, + {file = "numpy-1.26.4-cp311-cp311-win32.whl", hash = "sha256:1af303d6b2210eb850fcf03064d364652b7120803a0b872f5211f5234b399f20"}, + {file = "numpy-1.26.4-cp311-cp311-win_amd64.whl", hash = "sha256:cd25bcecc4974d09257ffcd1f098ee778f7834c3ad767fe5db785be9a4aa9cb2"}, + {file = "numpy-1.26.4-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:b3ce300f3644fb06443ee2222c2201dd3a89ea6040541412b8fa189341847218"}, + {file = "numpy-1.26.4-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:03a8c78d01d9781b28a6989f6fa1bb2c4f2d51201cf99d3dd875df6fbd96b23b"}, + {file = "numpy-1.26.4-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9fad7dcb1aac3c7f0584a5a8133e3a43eeb2fe127f47e3632d43d677c66c102b"}, + {file = "numpy-1.26.4-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:675d61ffbfa78604709862923189bad94014bef562cc35cf61d3a07bba02a7ed"}, + {file = "numpy-1.26.4-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:ab47dbe5cc8210f55aa58e4805fe224dac469cde56b9f731a4c098b91917159a"}, + {file = "numpy-1.26.4-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:1dda2e7b4ec9dd512f84935c5f126c8bd8b9f2fc001e9f54af255e8c5f16b0e0"}, + {file = "numpy-1.26.4-cp312-cp312-win32.whl", hash = "sha256:50193e430acfc1346175fcbdaa28ffec49947a06918b7b92130744e81e640110"}, + {file = "numpy-1.26.4-cp312-cp312-win_amd64.whl", hash = "sha256:08beddf13648eb95f8d867350f6a018a4be2e5ad54c8d8caed89ebca558b2818"}, + {file = "numpy-1.26.4-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:7349ab0fa0c429c82442a27a9673fc802ffdb7c7775fad780226cb234965e53c"}, + {file = "numpy-1.26.4-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:52b8b60467cd7dd1e9ed082188b4e6bb35aa5cdd01777621a1658910745b90be"}, + {file = "numpy-1.26.4-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = 
"sha256:d5241e0a80d808d70546c697135da2c613f30e28251ff8307eb72ba696945764"}, + {file = "numpy-1.26.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f870204a840a60da0b12273ef34f7051e98c3b5961b61b0c2c1be6dfd64fbcd3"}, + {file = "numpy-1.26.4-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:679b0076f67ecc0138fd2ede3a8fd196dddc2ad3254069bcb9faf9a79b1cebcd"}, + {file = "numpy-1.26.4-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:47711010ad8555514b434df65f7d7b076bb8261df1ca9bb78f53d3b2db02e95c"}, + {file = "numpy-1.26.4-cp39-cp39-win32.whl", hash = "sha256:a354325ee03388678242a4d7ebcd08b5c727033fcff3b2f536aea978e15ee9e6"}, + {file = "numpy-1.26.4-cp39-cp39-win_amd64.whl", hash = "sha256:3373d5d70a5fe74a2c1bb6d2cfd9609ecf686d47a2d7b1d37a8f3b6bf6003aea"}, + {file = "numpy-1.26.4-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:afedb719a9dcfc7eaf2287b839d8198e06dcd4cb5d276a3df279231138e83d30"}, + {file = "numpy-1.26.4-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:95a7476c59002f2f6c590b9b7b998306fba6a5aa646b1e22ddfeaf8f78c3a29c"}, + {file = "numpy-1.26.4-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:7e50d0a0cc3189f9cb0aeb3a6a6af18c16f59f004b866cd2be1c14b36134a4a0"}, + {file = "numpy-1.26.4.tar.gz", hash = "sha256:2a02aba9ed12e4ac4eb3ea9421c420301a0c6460d9830d74a9df87efa4912010"}, ] [[package]] @@ -742,6 +822,169 @@ files = [ dev = ["abi3audit", "black", "check-manifest", "coverage", "packaging", "pylint", "pyperf", "pypinfo", "pyreadline ; os_name == \"nt\"", "pytest", "pytest-cov", "pytest-instafail", "pytest-subtests", "pytest-xdist", "pywin32 ; os_name == \"nt\" and platform_python_implementation != \"PyPy\"", "requests", "rstcheck", "ruff", "setuptools", "sphinx", "sphinx_rtd_theme", "toml-sort", "twine", "virtualenv", "vulture", "wheel", "wheel ; os_name == \"nt\" and platform_python_implementation != \"PyPy\"", "wmi ; os_name == \"nt\" and platform_python_implementation != \"PyPy\""] test = 
["pytest", "pytest-instafail", "pytest-subtests", "pytest-xdist", "pywin32 ; os_name == \"nt\" and platform_python_implementation != \"PyPy\"", "setuptools", "wheel ; os_name == \"nt\" and platform_python_implementation != \"PyPy\"", "wmi ; os_name == \"nt\" and platform_python_implementation != \"PyPy\""] +[[package]] +name = "pydantic" +version = "2.12.0" +description = "Data validation using Python type hints" +optional = false +python-versions = ">=3.9" +groups = ["main"] +files = [ + {file = "pydantic-2.12.0-py3-none-any.whl", hash = "sha256:f6a1da352d42790537e95e83a8bdfb91c7efbae63ffd0b86fa823899e807116f"}, + {file = "pydantic-2.12.0.tar.gz", hash = "sha256:c1a077e6270dbfb37bfd8b498b3981e2bb18f68103720e51fa6c306a5a9af563"}, +] + +[package.dependencies] +annotated-types = ">=0.6.0" +pydantic-core = "2.41.1" +typing-extensions = ">=4.14.1" +typing-inspection = ">=0.4.2" + +[package.extras] +email = ["email-validator (>=2.0.0)"] +timezone = ["tzdata ; python_version >= \"3.9\" and platform_system == \"Windows\""] + +[[package]] +name = "pydantic-core" +version = "2.41.1" +description = "Core functionality for Pydantic validation and serialization" +optional = false +python-versions = ">=3.9" +groups = ["main"] +files = [ + {file = "pydantic_core-2.41.1-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:e63036298322e9aea1c8b7c0a6c1204d615dbf6ec0668ce5b83ff27f07404a61"}, + {file = "pydantic_core-2.41.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:241299ca91fc77ef64f11ed909d2d9220a01834e8e6f8de61275c4dd16b7c936"}, + {file = "pydantic_core-2.41.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1ab7e594a2a5c24ab8013a7dc8cfe5f2260e80e490685814122081705c2cf2b0"}, + {file = "pydantic_core-2.41.1-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:b054ef1a78519cb934b58e9c90c09e93b837c935dcd907b891f2b265b129eb6e"}, + {file = "pydantic_core-2.41.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", 
hash = "sha256:f2ab7d10d0ab2ed6da54c757233eb0f48ebfb4f86e9b88ccecb3f92bbd61a538"}, + {file = "pydantic_core-2.41.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2757606b7948bb853a27e4040820306eaa0ccb9e8f9f8a0fa40cb674e170f350"}, + {file = "pydantic_core-2.41.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:cec0e75eb61f606bad0a32f2be87507087514e26e8c73db6cbdb8371ccd27917"}, + {file = "pydantic_core-2.41.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:0234236514f44a5bf552105cfe2543a12f48203397d9d0f866affa569345a5b5"}, + {file = "pydantic_core-2.41.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:1b974e41adfbb4ebb0f65fc4ca951347b17463d60893ba7d5f7b9bb087c83897"}, + {file = "pydantic_core-2.41.1-cp310-cp310-musllinux_1_1_armv7l.whl", hash = "sha256:248dafb3204136113c383e91a4d815269f51562b6659b756cf3df14eefc7d0bb"}, + {file = "pydantic_core-2.41.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:678f9d76a91d6bcedd7568bbf6beb77ae8447f85d1aeebaab7e2f0829cfc3a13"}, + {file = "pydantic_core-2.41.1-cp310-cp310-win32.whl", hash = "sha256:dff5bee1d21ee58277900692a641925d2dddfde65182c972569b1a276d2ac8fb"}, + {file = "pydantic_core-2.41.1-cp310-cp310-win_amd64.whl", hash = "sha256:5042da12e5d97d215f91567110fdfa2e2595a25f17c19b9ff024f31c34f9b53e"}, + {file = "pydantic_core-2.41.1-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:4f276a6134fe1fc1daa692642a3eaa2b7b858599c49a7610816388f5e37566a1"}, + {file = "pydantic_core-2.41.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:07588570a805296ece009c59d9a679dc08fab72fb337365afb4f3a14cfbfc176"}, + {file = "pydantic_core-2.41.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:28527e4b53400cd60ffbd9812ccb2b5135d042129716d71afd7e45bf42b855c0"}, + {file = "pydantic_core-2.41.1-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = 
"sha256:46a1c935c9228bad738c8a41de06478770927baedf581d172494ab36a6b96575"}, + {file = "pydantic_core-2.41.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:447ddf56e2b7d28d200d3e9eafa936fe40485744b5a824b67039937580b3cb20"}, + {file = "pydantic_core-2.41.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:63892ead40c1160ac860b5debcc95c95c5a0035e543a8b5a4eac70dd22e995f4"}, + {file = "pydantic_core-2.41.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f4a9543ca355e6df8fbe9c83e9faab707701e9103ae857ecb40f1c0cf8b0e94d"}, + {file = "pydantic_core-2.41.1-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:f2611bdb694116c31e551ed82e20e39a90bea9b7ad9e54aaf2d045ad621aa7a1"}, + {file = "pydantic_core-2.41.1-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:fecc130893a9b5f7bfe230be1bb8c61fe66a19db8ab704f808cb25a82aad0bc9"}, + {file = "pydantic_core-2.41.1-cp311-cp311-musllinux_1_1_armv7l.whl", hash = "sha256:1e2df5f8344c99b6ea5219f00fdc8950b8e6f2c422fbc1cc122ec8641fac85a1"}, + {file = "pydantic_core-2.41.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:35291331e9d8ed94c257bab6be1cb3a380b5eee570a2784bffc055e18040a2ea"}, + {file = "pydantic_core-2.41.1-cp311-cp311-win32.whl", hash = "sha256:2876a095292668d753f1a868c4a57c4ac9f6acbd8edda8debe4218d5848cf42f"}, + {file = "pydantic_core-2.41.1-cp311-cp311-win_amd64.whl", hash = "sha256:b92d6c628e9a338846a28dfe3fcdc1a3279388624597898b105e078cdfc59298"}, + {file = "pydantic_core-2.41.1-cp311-cp311-win_arm64.whl", hash = "sha256:7d82ae99409eb69d507a89835488fb657faa03ff9968a9379567b0d2e2e56bc5"}, + {file = "pydantic_core-2.41.1-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:db2f82c0ccbce8f021ad304ce35cbe02aa2f95f215cac388eed542b03b4d5eb4"}, + {file = "pydantic_core-2.41.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:47694a31c710ced9205d5f1e7e8af3ca57cbb8a503d98cb9e33e27c97a501601"}, + {file = 
"pydantic_core-2.41.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:93e9decce94daf47baf9e9d392f5f2557e783085f7c5e522011545d9d6858e00"}, + {file = "pydantic_core-2.41.1-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:ab0adafdf2b89c8b84f847780a119437a0931eca469f7b44d356f2b426dd9741"}, + {file = "pydantic_core-2.41.1-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5da98cc81873f39fd56882e1569c4677940fbc12bce6213fad1ead784192d7c8"}, + {file = "pydantic_core-2.41.1-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:209910e88afb01fd0fd403947b809ba8dba0e08a095e1f703294fda0a8fdca51"}, + {file = "pydantic_core-2.41.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:365109d1165d78d98e33c5bfd815a9b5d7d070f578caefaabcc5771825b4ecb5"}, + {file = "pydantic_core-2.41.1-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:706abf21e60a2857acdb09502bc853ee5bce732955e7b723b10311114f033115"}, + {file = "pydantic_core-2.41.1-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:bf0bd5417acf7f6a7ec3b53f2109f587be176cb35f9cf016da87e6017437a72d"}, + {file = "pydantic_core-2.41.1-cp312-cp312-musllinux_1_1_armv7l.whl", hash = "sha256:2e71b1c6ceb9c78424ae9f63a07292fb769fb890a4e7efca5554c47f33a60ea5"}, + {file = "pydantic_core-2.41.1-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:80745b9770b4a38c25015b517451c817799bfb9d6499b0d13d8227ec941cb513"}, + {file = "pydantic_core-2.41.1-cp312-cp312-win32.whl", hash = "sha256:83b64d70520e7890453f1aa21d66fda44e7b35f1cfea95adf7b4289a51e2b479"}, + {file = "pydantic_core-2.41.1-cp312-cp312-win_amd64.whl", hash = "sha256:377defd66ee2003748ee93c52bcef2d14fde48fe28a0b156f88c3dbf9bc49a50"}, + {file = "pydantic_core-2.41.1-cp312-cp312-win_arm64.whl", hash = "sha256:c95caff279d49c1d6cdfe2996e6c2ad712571d3b9caaa209a404426c326c4bde"}, + {file = "pydantic_core-2.41.1-cp313-cp313-macosx_10_12_x86_64.whl", 
hash = "sha256:70e790fce5f05204ef4403159857bfcd587779da78627b0babb3654f75361ebf"}, + {file = "pydantic_core-2.41.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:9cebf1ca35f10930612d60bd0f78adfacee824c30a880e3534ba02c207cceceb"}, + {file = "pydantic_core-2.41.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:170406a37a5bc82c22c3274616bf6f17cc7df9c4a0a0a50449e559cb755db669"}, + {file = "pydantic_core-2.41.1-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:12d4257fc9187a0ccd41b8b327d6a4e57281ab75e11dda66a9148ef2e1fb712f"}, + {file = "pydantic_core-2.41.1-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a75a33b4db105dd1c8d57839e17ee12db8d5ad18209e792fa325dbb4baeb00f4"}, + {file = "pydantic_core-2.41.1-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:08a589f850803a74e0fcb16a72081cafb0d72a3cdda500106942b07e76b7bf62"}, + {file = "pydantic_core-2.41.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7a97939d6ea44763c456bd8a617ceada2c9b96bb5b8ab3dfa0d0827df7619014"}, + {file = "pydantic_core-2.41.1-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:d2ae423c65c556f09569524b80ffd11babff61f33055ef9773d7c9fabc11ed8d"}, + {file = "pydantic_core-2.41.1-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:4dc703015fbf8764d6a8001c327a87f1823b7328d40b47ce6000c65918ad2b4f"}, + {file = "pydantic_core-2.41.1-cp313-cp313-musllinux_1_1_armv7l.whl", hash = "sha256:968e4ffdfd35698a5fe659e5e44c508b53664870a8e61c8f9d24d3d145d30257"}, + {file = "pydantic_core-2.41.1-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:fff2b76c8e172d34771cd4d4f0ade08072385310f214f823b5a6ad4006890d32"}, + {file = "pydantic_core-2.41.1-cp313-cp313-win32.whl", hash = "sha256:a38a5263185407ceb599f2f035faf4589d57e73c7146d64f10577f6449e8171d"}, + {file = "pydantic_core-2.41.1-cp313-cp313-win_amd64.whl", hash = 
"sha256:b42ae7fd6760782c975897e1fdc810f483b021b32245b0105d40f6e7a3803e4b"}, + {file = "pydantic_core-2.41.1-cp313-cp313-win_arm64.whl", hash = "sha256:ad4111acc63b7384e205c27a2f15e23ac0ee21a9d77ad6f2e9cb516ec90965fb"}, + {file = "pydantic_core-2.41.1-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:440d0df7415b50084a4ba9d870480c16c5f67c0d1d4d5119e3f70925533a0edc"}, + {file = "pydantic_core-2.41.1-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:71eaa38d342099405dae6484216dcf1e8e4b0bebd9b44a4e08c9b43db6a2ab67"}, + {file = "pydantic_core-2.41.1-cp313-cp313t-win_amd64.whl", hash = "sha256:555ecf7e50f1161d3f693bc49f23c82cf6cdeafc71fa37a06120772a09a38795"}, + {file = "pydantic_core-2.41.1-cp314-cp314-macosx_10_12_x86_64.whl", hash = "sha256:05226894a26f6f27e1deb735d7308f74ef5fa3a6de3e0135bb66cdcaee88f64b"}, + {file = "pydantic_core-2.41.1-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:85ff7911c6c3e2fd8d3779c50925f6406d770ea58ea6dde9c230d35b52b16b4a"}, + {file = "pydantic_core-2.41.1-cp314-cp314-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:47f1f642a205687d59b52dc1a9a607f45e588f5a2e9eeae05edd80c7a8c47674"}, + {file = "pydantic_core-2.41.1-cp314-cp314-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:df11c24e138876ace5ec6043e5cae925e34cf38af1a1b3d63589e8f7b5f5cdc4"}, + {file = "pydantic_core-2.41.1-cp314-cp314-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:7f0bf7f5c8f7bf345c527e8a0d72d6b26eda99c1227b0c34e7e59e181260de31"}, + {file = "pydantic_core-2.41.1-cp314-cp314-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:82b887a711d341c2c47352375d73b029418f55b20bd7815446d175a70effa706"}, + {file = "pydantic_core-2.41.1-cp314-cp314-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b5f1d5d6bbba484bdf220c72d8ecd0be460f4bd4c5e534a541bb2cd57589fb8b"}, + {file = "pydantic_core-2.41.1-cp314-cp314-manylinux_2_5_i686.manylinux1_i686.whl", hash = 
"sha256:2bf1917385ebe0f968dc5c6ab1375886d56992b93ddfe6bf52bff575d03662be"}, + {file = "pydantic_core-2.41.1-cp314-cp314-musllinux_1_1_aarch64.whl", hash = "sha256:4f94f3ab188f44b9a73f7295663f3ecb8f2e2dd03a69c8f2ead50d37785ecb04"}, + {file = "pydantic_core-2.41.1-cp314-cp314-musllinux_1_1_armv7l.whl", hash = "sha256:3925446673641d37c30bd84a9d597e49f72eacee8b43322c8999fa17d5ae5bc4"}, + {file = "pydantic_core-2.41.1-cp314-cp314-musllinux_1_1_x86_64.whl", hash = "sha256:49bd51cc27adb980c7b97357ae036ce9b3c4d0bb406e84fbe16fb2d368b602a8"}, + {file = "pydantic_core-2.41.1-cp314-cp314-win32.whl", hash = "sha256:a31ca0cd0e4d12ea0df0077df2d487fc3eb9d7f96bbb13c3c5b88dcc21d05159"}, + {file = "pydantic_core-2.41.1-cp314-cp314-win_amd64.whl", hash = "sha256:1b5c4374a152e10a22175d7790e644fbd8ff58418890e07e2073ff9d4414efae"}, + {file = "pydantic_core-2.41.1-cp314-cp314-win_arm64.whl", hash = "sha256:4fee76d757639b493eb600fba668f1e17475af34c17dd61db7a47e824d464ca9"}, + {file = "pydantic_core-2.41.1-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:f9b9c968cfe5cd576fdd7361f47f27adeb120517e637d1b189eea1c3ece573f4"}, + {file = "pydantic_core-2.41.1-cp314-cp314t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f1ebc7ab67b856384aba09ed74e3e977dded40e693de18a4f197c67d0d4e6d8e"}, + {file = "pydantic_core-2.41.1-cp314-cp314t-win_amd64.whl", hash = "sha256:8ae0dc57b62a762985bc7fbf636be3412394acc0ddb4ade07fe104230f1b9762"}, + {file = "pydantic_core-2.41.1-cp39-cp39-macosx_10_12_x86_64.whl", hash = "sha256:10ce489cf09a4956a1549af839b983edc59b0f60e1b068c21b10154e58f54f80"}, + {file = "pydantic_core-2.41.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:ff548c908caffd9455fd1342366bcf8a1ec8a3fca42f35c7fc60883d6a901074"}, + {file = "pydantic_core-2.41.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3d43bf082025082bda13be89a5f876cc2386b7727c7b322be2d2b706a45cea8e"}, + {file = 
"pydantic_core-2.41.1-cp39-cp39-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:666aee751faf1c6864b2db795775dd67b61fdcf646abefa309ed1da039a97209"}, + {file = "pydantic_core-2.41.1-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:b83aaeff0d7bde852c32e856f3ee410842ebc08bc55c510771d87dcd1c01e1ed"}, + {file = "pydantic_core-2.41.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:055c7931b0329cb8acde20cdde6d9c2cbc2a02a0a8e54a792cddd91e2ea92c65"}, + {file = "pydantic_core-2.41.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:530bbb1347e3e5ca13a91ac087c4971d7da09630ef8febd27a20a10800c2d06d"}, + {file = "pydantic_core-2.41.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:65a0ea16cfea7bfa9e43604c8bd726e63a3788b61c384c37664b55209fcb1d74"}, + {file = "pydantic_core-2.41.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:8fa93fadff794c6d15c345c560513b160197342275c6d104cc879f932b978afc"}, + {file = "pydantic_core-2.41.1-cp39-cp39-musllinux_1_1_armv7l.whl", hash = "sha256:c8a1af9ac51969a494c6a82b563abae6859dc082d3b999e8fa7ba5ee1b05e8e8"}, + {file = "pydantic_core-2.41.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:30edab28829703f876897c9471a857e43d847b8799c3c9e2fbce644724b50aa4"}, + {file = "pydantic_core-2.41.1-cp39-cp39-win32.whl", hash = "sha256:84d0ff869f98be2e93efdf1ae31e5a15f0926d22af8677d51676e373abbfe57a"}, + {file = "pydantic_core-2.41.1-cp39-cp39-win_amd64.whl", hash = "sha256:b5674314987cdde5a5511b029fa5fb1556b3d147a367e01dd583b19cfa8e35df"}, + {file = "pydantic_core-2.41.1-graalpy311-graalpy242_311_native-macosx_10_12_x86_64.whl", hash = "sha256:68f2251559b8efa99041bb63571ec7cdd2d715ba74cc82b3bc9eff824ebc8bf0"}, + {file = "pydantic_core-2.41.1-graalpy311-graalpy242_311_native-macosx_11_0_arm64.whl", hash = "sha256:c7bc140c596097cb53b30546ca257dbe3f19282283190b1b5142928e5d5d3a20"}, + {file = 
"pydantic_core-2.41.1-graalpy311-graalpy242_311_native-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2896510fce8f4725ec518f8b9d7f015a00db249d2fd40788f442af303480063d"}, + {file = "pydantic_core-2.41.1-graalpy311-graalpy242_311_native-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ced20e62cfa0f496ba68fa5d6c7ee71114ea67e2a5da3114d6450d7f4683572a"}, + {file = "pydantic_core-2.41.1-graalpy312-graalpy250_312_native-macosx_10_12_x86_64.whl", hash = "sha256:b04fa9ed049461a7398138c604b00550bc89e3e1151d84b81ad6dc93e39c4c06"}, + {file = "pydantic_core-2.41.1-graalpy312-graalpy250_312_native-macosx_11_0_arm64.whl", hash = "sha256:b3b7d9cfbfdc43c80a16638c6dc2768e3956e73031fca64e8e1a3ae744d1faeb"}, + {file = "pydantic_core-2.41.1-graalpy312-graalpy250_312_native-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:eec83fc6abef04c7f9bec616e2d76ee9a6a4ae2a359b10c21d0f680e24a247ca"}, + {file = "pydantic_core-2.41.1-graalpy312-graalpy250_312_native-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6771a2d9f83c4038dfad5970a3eef215940682b2175e32bcc817bdc639019b28"}, + {file = "pydantic_core-2.41.1-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:fabcbdb12de6eada8d6e9a759097adb3c15440fafc675b3e94ae5c9cb8d678a0"}, + {file = "pydantic_core-2.41.1-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:80e97ccfaf0aaf67d55de5085b0ed0d994f57747d9d03f2de5cc9847ca737b08"}, + {file = "pydantic_core-2.41.1-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:34df1fe8fea5d332484a763702e8b6a54048a9d4fe6ccf41e34a128238e01f52"}, + {file = "pydantic_core-2.41.1-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:421b5595f845842fc093f7250e24ee395f54ca62d494fdde96f43ecf9228ae01"}, + {file = "pydantic_core-2.41.1-pp310-pypy310_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:dce8b22663c134583aaad24827863306a933f576c79da450be3984924e2031d1"}, + {file = 
"pydantic_core-2.41.1-pp310-pypy310_pp73-musllinux_1_1_armv7l.whl", hash = "sha256:300a9c162fea9906cc5c103893ca2602afd84f0ec90d3be36f4cc360125d22e1"}, + {file = "pydantic_core-2.41.1-pp310-pypy310_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:e019167628f6e6161ae7ab9fb70f6d076a0bf0d55aa9b20833f86a320c70dd65"}, + {file = "pydantic_core-2.41.1-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:13ab9cc2de6f9d4ab645a050ae5aee61a2424ac4d3a16ba23d4c2027705e0301"}, + {file = "pydantic_core-2.41.1-pp311-pypy311_pp73-macosx_10_12_x86_64.whl", hash = "sha256:af2385d3f98243fb733862f806c5bb9122e5fba05b373e3af40e3c82d711cef1"}, + {file = "pydantic_core-2.41.1-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:6550617a0c2115be56f90c31a5370261d8ce9dbf051c3ed53b51172dd34da696"}, + {file = "pydantic_core-2.41.1-pp311-pypy311_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:dc17b6ecf4983d298686014c92ebc955a9f9baf9f57dad4065e7906e7bee6222"}, + {file = "pydantic_core-2.41.1-pp311-pypy311_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:42ae9352cf211f08b04ea110563d6b1e415878eea5b4c70f6bdb17dca3b932d2"}, + {file = "pydantic_core-2.41.1-pp311-pypy311_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:e82947de92068b0a21681a13dd2102387197092fbe7defcfb8453e0913866506"}, + {file = "pydantic_core-2.41.1-pp311-pypy311_pp73-musllinux_1_1_armv7l.whl", hash = "sha256:e244c37d5471c9acdcd282890c6c4c83747b77238bfa19429b8473586c907656"}, + {file = "pydantic_core-2.41.1-pp311-pypy311_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:1e798b4b304a995110d41ec93653e57975620ccb2842ba9420037985e7d7284e"}, + {file = "pydantic_core-2.41.1-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:f1fc716c0eb1663c59699b024428ad5ec2bcc6b928527b8fe28de6cb89f47efb"}, + {file = "pydantic_core-2.41.1.tar.gz", hash = "sha256:1ad375859a6d8c356b7704ec0f547a58e82ee80bb41baa811ad710e124bc8f2f"}, +] + +[package.dependencies] +typing-extensions = ">=4.14.1" + +[[package]] +name = "python-dotenv" 
+version = "1.1.1" +description = "Read key-value pairs from a .env file and set them as environment variables" +optional = false +python-versions = ">=3.9" +groups = ["main"] +files = [ + {file = "python_dotenv-1.1.1-py3-none-any.whl", hash = "sha256:31f23644fe2602f88ff55e1f5c79ba497e01224ee7737937930c448e4d0e24dc"}, + {file = "python_dotenv-1.1.1.tar.gz", hash = "sha256:a8a6399716257f45be6a007360200409fce5cda2661e3dec71d23dc15f6189ab"}, +] + +[package.extras] +cli = ["click (>=5.0)"] + [[package]] name = "pyyaml" version = "6.0.3" @@ -1004,141 +1247,6 @@ testing = ["h5py (>=3.7.0)", "huggingface-hub (>=0.12.1)", "hypothesis (>=6.70.2 testingfree = ["huggingface-hub (>=0.12.1)", "hypothesis (>=6.70.2)", "pytest (>=7.2.0)", "pytest-benchmark (>=4.0.0)", "safetensors[numpy]", "setuptools-rust (>=1.5.2)"] torch = ["safetensors[numpy]", "torch (>=1.10)"] -[[package]] -name = "scikit-learn" -version = "1.7.2" -description = "A set of python modules for machine learning and data mining" -optional = false -python-versions = ">=3.10" -groups = ["main"] -files = [ - {file = "scikit_learn-1.7.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:6b33579c10a3081d076ab403df4a4190da4f4432d443521674637677dc91e61f"}, - {file = "scikit_learn-1.7.2-cp310-cp310-macosx_12_0_arm64.whl", hash = "sha256:36749fb62b3d961b1ce4fedf08fa57a1986cd409eff2d783bca5d4b9b5fce51c"}, - {file = "scikit_learn-1.7.2-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:7a58814265dfc52b3295b1900cfb5701589d30a8bb026c7540f1e9d3499d5ec8"}, - {file = "scikit_learn-1.7.2-cp310-cp310-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:4a847fea807e278f821a0406ca01e387f97653e284ecbd9750e3ee7c90347f18"}, - {file = "scikit_learn-1.7.2-cp310-cp310-win_amd64.whl", hash = "sha256:ca250e6836d10e6f402436d6463d6c0e4d8e0234cfb6a9a47835bd392b852ce5"}, - {file = "scikit_learn-1.7.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = 
"sha256:c7509693451651cd7361d30ce4e86a1347493554f172b1c72a39300fa2aea79e"}, - {file = "scikit_learn-1.7.2-cp311-cp311-macosx_12_0_arm64.whl", hash = "sha256:0486c8f827c2e7b64837c731c8feff72c0bd2b998067a8a9cbc10643c31f0fe1"}, - {file = "scikit_learn-1.7.2-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:89877e19a80c7b11a2891a27c21c4894fb18e2c2e077815bcade10d34287b20d"}, - {file = "scikit_learn-1.7.2-cp311-cp311-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:8da8bf89d4d79aaec192d2bda62f9b56ae4e5b4ef93b6a56b5de4977e375c1f1"}, - {file = "scikit_learn-1.7.2-cp311-cp311-win_amd64.whl", hash = "sha256:9b7ed8d58725030568523e937c43e56bc01cadb478fc43c042a9aca1dacb3ba1"}, - {file = "scikit_learn-1.7.2-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:8d91a97fa2b706943822398ab943cde71858a50245e31bc71dba62aab1d60a96"}, - {file = "scikit_learn-1.7.2-cp312-cp312-macosx_12_0_arm64.whl", hash = "sha256:acbc0f5fd2edd3432a22c69bed78e837c70cf896cd7993d71d51ba6708507476"}, - {file = "scikit_learn-1.7.2-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:e5bf3d930aee75a65478df91ac1225ff89cd28e9ac7bd1196853a9229b6adb0b"}, - {file = "scikit_learn-1.7.2-cp312-cp312-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:b4d6e9deed1a47aca9fe2f267ab8e8fe82ee20b4526b2c0cd9e135cea10feb44"}, - {file = "scikit_learn-1.7.2-cp312-cp312-win_amd64.whl", hash = "sha256:6088aa475f0785e01bcf8529f55280a3d7d298679f50c0bb70a2364a82d0b290"}, - {file = "scikit_learn-1.7.2-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:0b7dacaa05e5d76759fb071558a8b5130f4845166d88654a0f9bdf3eb57851b7"}, - {file = "scikit_learn-1.7.2-cp313-cp313-macosx_12_0_arm64.whl", hash = "sha256:abebbd61ad9e1deed54cca45caea8ad5f79e1b93173dece40bb8e0c658dbe6fe"}, - {file = "scikit_learn-1.7.2-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:502c18e39849c0ea1a5d681af1dbcf15f6cce601aebb657aabbfe84133c1907f"}, - {file = 
"scikit_learn-1.7.2-cp313-cp313-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:7a4c328a71785382fe3fe676a9ecf2c86189249beff90bf85e22bdb7efaf9ae0"}, - {file = "scikit_learn-1.7.2-cp313-cp313-win_amd64.whl", hash = "sha256:63a9afd6f7b229aad94618c01c252ce9e6fa97918c5ca19c9a17a087d819440c"}, - {file = "scikit_learn-1.7.2-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:9acb6c5e867447b4e1390930e3944a005e2cb115922e693c08a323421a6966e8"}, - {file = "scikit_learn-1.7.2-cp313-cp313t-macosx_12_0_arm64.whl", hash = "sha256:2a41e2a0ef45063e654152ec9d8bcfc39f7afce35b08902bfe290c2498a67a6a"}, - {file = "scikit_learn-1.7.2-cp313-cp313t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:98335fb98509b73385b3ab2bd0639b1f610541d3988ee675c670371d6a87aa7c"}, - {file = "scikit_learn-1.7.2-cp313-cp313t-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:191e5550980d45449126e23ed1d5e9e24b2c68329ee1f691a3987476e115e09c"}, - {file = "scikit_learn-1.7.2-cp313-cp313t-win_amd64.whl", hash = "sha256:57dc4deb1d3762c75d685507fbd0bc17160144b2f2ba4ccea5dc285ab0d0e973"}, - {file = "scikit_learn-1.7.2-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:fa8f63940e29c82d1e67a45d5297bdebbcb585f5a5a50c4914cc2e852ab77f33"}, - {file = "scikit_learn-1.7.2-cp314-cp314-macosx_12_0_arm64.whl", hash = "sha256:f95dc55b7902b91331fa4e5845dd5bde0580c9cd9612b1b2791b7e80c3d32615"}, - {file = "scikit_learn-1.7.2-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:9656e4a53e54578ad10a434dc1f993330568cfee176dff07112b8785fb413106"}, - {file = "scikit_learn-1.7.2-cp314-cp314-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:96dc05a854add0e50d3f47a1ef21a10a595016da5b007c7d9cd9d0bffd1fcc61"}, - {file = "scikit_learn-1.7.2-cp314-cp314-win_amd64.whl", hash = "sha256:bb24510ed3f9f61476181e4db51ce801e2ba37541def12dc9333b946fc7a9cf8"}, - {file = "scikit_learn-1.7.2.tar.gz", hash = 
"sha256:20e9e49ecd130598f1ca38a1d85090e1a600147b9c02fa6f15d69cb53d968fda"}, -] - -[package.dependencies] -joblib = ">=1.2.0" -numpy = ">=1.22.0" -scipy = ">=1.8.0" -threadpoolctl = ">=3.1.0" - -[package.extras] -benchmark = ["matplotlib (>=3.5.0)", "memory_profiler (>=0.57.0)", "pandas (>=1.4.0)"] -build = ["cython (>=3.0.10)", "meson-python (>=0.17.1)", "numpy (>=1.22.0)", "scipy (>=1.8.0)"] -docs = ["Pillow (>=8.4.0)", "matplotlib (>=3.5.0)", "memory_profiler (>=0.57.0)", "numpydoc (>=1.2.0)", "pandas (>=1.4.0)", "plotly (>=5.14.0)", "polars (>=0.20.30)", "pooch (>=1.6.0)", "pydata-sphinx-theme (>=0.15.3)", "scikit-image (>=0.19.0)", "seaborn (>=0.9.0)", "sphinx (>=7.3.7)", "sphinx-copybutton (>=0.5.2)", "sphinx-design (>=0.5.0)", "sphinx-design (>=0.6.0)", "sphinx-gallery (>=0.17.1)", "sphinx-prompt (>=1.4.0)", "sphinx-remove-toctrees (>=1.0.0.post1)", "sphinxcontrib-sass (>=0.3.4)", "sphinxext-opengraph (>=0.9.1)", "towncrier (>=24.8.0)"] -examples = ["matplotlib (>=3.5.0)", "pandas (>=1.4.0)", "plotly (>=5.14.0)", "pooch (>=1.6.0)", "scikit-image (>=0.19.0)", "seaborn (>=0.9.0)"] -install = ["joblib (>=1.2.0)", "numpy (>=1.22.0)", "scipy (>=1.8.0)", "threadpoolctl (>=3.1.0)"] -maintenance = ["conda-lock (==3.0.1)"] -tests = ["matplotlib (>=3.5.0)", "mypy (>=1.15)", "numpydoc (>=1.2.0)", "pandas (>=1.4.0)", "polars (>=0.20.30)", "pooch (>=1.6.0)", "pyamg (>=4.2.1)", "pyarrow (>=12.0.0)", "pytest (>=7.1.2)", "pytest-cov (>=2.9.0)", "ruff (>=0.11.7)", "scikit-image (>=0.19.0)"] - -[[package]] -name = "scipy" -version = "1.16.2" -description = "Fundamental algorithms for scientific computing in Python" -optional = false -python-versions = ">=3.11" -groups = ["main"] -files = [ - {file = "scipy-1.16.2-cp311-cp311-macosx_10_14_x86_64.whl", hash = "sha256:6ab88ea43a57da1af33292ebd04b417e8e2eaf9d5aa05700be8d6e1b6501cd92"}, - {file = "scipy-1.16.2-cp311-cp311-macosx_12_0_arm64.whl", hash = "sha256:c95e96c7305c96ede73a7389f46ccd6c659c4da5ef1b2789466baeaed3622b6e"}, - 
{file = "scipy-1.16.2-cp311-cp311-macosx_14_0_arm64.whl", hash = "sha256:87eb178db04ece7c698220d523c170125dbffebb7af0345e66c3554f6f60c173"}, - {file = "scipy-1.16.2-cp311-cp311-macosx_14_0_x86_64.whl", hash = "sha256:4e409eac067dcee96a57fbcf424c13f428037827ec7ee3cb671ff525ca4fc34d"}, - {file = "scipy-1.16.2-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:e574be127bb760f0dad24ff6e217c80213d153058372362ccb9555a10fc5e8d2"}, - {file = "scipy-1.16.2-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:f5db5ba6188d698ba7abab982ad6973265b74bb40a1efe1821b58c87f73892b9"}, - {file = "scipy-1.16.2-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:ec6e74c4e884104ae006d34110677bfe0098203a3fec2f3faf349f4cb05165e3"}, - {file = "scipy-1.16.2-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:912f46667d2d3834bc3d57361f854226475f695eb08c08a904aadb1c936b6a88"}, - {file = "scipy-1.16.2-cp311-cp311-win_amd64.whl", hash = "sha256:91e9e8a37befa5a69e9cacbe0bcb79ae5afb4a0b130fd6db6ee6cc0d491695fa"}, - {file = "scipy-1.16.2-cp311-cp311-win_arm64.whl", hash = "sha256:f3bf75a6dcecab62afde4d1f973f1692be013110cad5338007927db8da73249c"}, - {file = "scipy-1.16.2-cp312-cp312-macosx_10_14_x86_64.whl", hash = "sha256:89d6c100fa5c48472047632e06f0876b3c4931aac1f4291afc81a3644316bb0d"}, - {file = "scipy-1.16.2-cp312-cp312-macosx_12_0_arm64.whl", hash = "sha256:ca748936cd579d3f01928b30a17dc474550b01272d8046e3e1ee593f23620371"}, - {file = "scipy-1.16.2-cp312-cp312-macosx_14_0_arm64.whl", hash = "sha256:fac4f8ce2ddb40e2e3d0f7ec36d2a1e7f92559a2471e59aec37bd8d9de01fec0"}, - {file = "scipy-1.16.2-cp312-cp312-macosx_14_0_x86_64.whl", hash = "sha256:033570f1dcefd79547a88e18bccacff025c8c647a330381064f561d43b821232"}, - {file = "scipy-1.16.2-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:ea3421209bf00c8a5ef2227de496601087d8f638a2363ee09af059bd70976dc1"}, - {file = 
"scipy-1.16.2-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:f66bd07ba6f84cd4a380b41d1bf3c59ea488b590a2ff96744845163309ee8e2f"}, - {file = "scipy-1.16.2-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:5e9feab931bd2aea4a23388c962df6468af3d808ddf2d40f94a81c5dc38f32ef"}, - {file = "scipy-1.16.2-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:03dfc75e52f72cf23ec2ced468645321407faad8f0fe7b1f5b49264adbc29cb1"}, - {file = "scipy-1.16.2-cp312-cp312-win_amd64.whl", hash = "sha256:0ce54e07bbb394b417457409a64fd015be623f36e330ac49306433ffe04bc97e"}, - {file = "scipy-1.16.2-cp312-cp312-win_arm64.whl", hash = "sha256:2a8ffaa4ac0df81a0b94577b18ee079f13fecdb924df3328fc44a7dc5ac46851"}, - {file = "scipy-1.16.2-cp313-cp313-macosx_10_14_x86_64.whl", hash = "sha256:84f7bf944b43e20b8a894f5fe593976926744f6c185bacfcbdfbb62736b5cc70"}, - {file = "scipy-1.16.2-cp313-cp313-macosx_12_0_arm64.whl", hash = "sha256:5c39026d12edc826a1ef2ad35ad1e6d7f087f934bb868fc43fa3049c8b8508f9"}, - {file = "scipy-1.16.2-cp313-cp313-macosx_14_0_arm64.whl", hash = "sha256:e52729ffd45b68777c5319560014d6fd251294200625d9d70fd8626516fc49f5"}, - {file = "scipy-1.16.2-cp313-cp313-macosx_14_0_x86_64.whl", hash = "sha256:024dd4a118cccec09ca3209b7e8e614931a6ffb804b2a601839499cb88bdf925"}, - {file = "scipy-1.16.2-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:7a5dc7ee9c33019973a470556081b0fd3c9f4c44019191039f9769183141a4d9"}, - {file = "scipy-1.16.2-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:c2275ff105e508942f99d4e3bc56b6ef5e4b3c0af970386ca56b777608ce95b7"}, - {file = "scipy-1.16.2-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:af80196eaa84f033e48444d2e0786ec47d328ba00c71e4299b602235ffef9acb"}, - {file = "scipy-1.16.2-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:9fb1eb735fe3d6ed1f89918224e3385fbf6f9e23757cacc35f9c78d3b712dd6e"}, - {file = "scipy-1.16.2-cp313-cp313-win_amd64.whl", hash = 
"sha256:fda714cf45ba43c9d3bae8f2585c777f64e3f89a2e073b668b32ede412d8f52c"}, - {file = "scipy-1.16.2-cp313-cp313-win_arm64.whl", hash = "sha256:2f5350da923ccfd0b00e07c3e5cfb316c1c0d6c1d864c07a72d092e9f20db104"}, - {file = "scipy-1.16.2-cp313-cp313t-macosx_10_14_x86_64.whl", hash = "sha256:53d8d2ee29b925344c13bda64ab51785f016b1b9617849dac10897f0701b20c1"}, - {file = "scipy-1.16.2-cp313-cp313t-macosx_12_0_arm64.whl", hash = "sha256:9e05e33657efb4c6a9d23bd8300101536abd99c85cca82da0bffff8d8764d08a"}, - {file = "scipy-1.16.2-cp313-cp313t-macosx_14_0_arm64.whl", hash = "sha256:7fe65b36036357003b3ef9d37547abeefaa353b237e989c21027b8ed62b12d4f"}, - {file = "scipy-1.16.2-cp313-cp313t-macosx_14_0_x86_64.whl", hash = "sha256:6406d2ac6d40b861cccf57f49592f9779071655e9f75cd4f977fa0bdd09cb2e4"}, - {file = "scipy-1.16.2-cp313-cp313t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:ff4dc42bd321991fbf611c23fc35912d690f731c9914bf3af8f417e64aca0f21"}, - {file = "scipy-1.16.2-cp313-cp313t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:654324826654d4d9133e10675325708fb954bc84dae6e9ad0a52e75c6b1a01d7"}, - {file = "scipy-1.16.2-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:63870a84cd15c44e65220eaed2dac0e8f8b26bbb991456a033c1d9abfe8a94f8"}, - {file = "scipy-1.16.2-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:fa01f0f6a3050fa6a9771a95d5faccc8e2f5a92b4a2e5440a0fa7264a2398472"}, - {file = "scipy-1.16.2-cp313-cp313t-win_amd64.whl", hash = "sha256:116296e89fba96f76353a8579820c2512f6e55835d3fad7780fece04367de351"}, - {file = "scipy-1.16.2-cp313-cp313t-win_arm64.whl", hash = "sha256:98e22834650be81d42982360382b43b17f7ba95e0e6993e2a4f5b9ad9283a94d"}, - {file = "scipy-1.16.2-cp314-cp314-macosx_10_14_x86_64.whl", hash = "sha256:567e77755019bb7461513c87f02bb73fb65b11f049aaaa8ca17cfaa5a5c45d77"}, - {file = "scipy-1.16.2-cp314-cp314-macosx_12_0_arm64.whl", hash = "sha256:17d9bb346194e8967296621208fcdfd39b55498ef7d2f376884d5ac47cec1a70"}, - {file = 
"scipy-1.16.2-cp314-cp314-macosx_14_0_arm64.whl", hash = "sha256:0a17541827a9b78b777d33b623a6dcfe2ef4a25806204d08ead0768f4e529a88"}, - {file = "scipy-1.16.2-cp314-cp314-macosx_14_0_x86_64.whl", hash = "sha256:d7d4c6ba016ffc0f9568d012f5f1eb77ddd99412aea121e6fa8b4c3b7cbad91f"}, - {file = "scipy-1.16.2-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:9702c4c023227785c779cba2e1d6f7635dbb5b2e0936cdd3a4ecb98d78fd41eb"}, - {file = "scipy-1.16.2-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:d1cdf0ac28948d225decdefcc45ad7dd91716c29ab56ef32f8e0d50657dffcc7"}, - {file = "scipy-1.16.2-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:70327d6aa572a17c2941cdfb20673f82e536e91850a2e4cb0c5b858b690e1548"}, - {file = "scipy-1.16.2-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:5221c0b2a4b58aa7c4ed0387d360fd90ee9086d383bb34d9f2789fafddc8a936"}, - {file = "scipy-1.16.2-cp314-cp314-win_amd64.whl", hash = "sha256:f5a85d7b2b708025af08f060a496dd261055b617d776fc05a1a1cc69e09fe9ff"}, - {file = "scipy-1.16.2-cp314-cp314-win_arm64.whl", hash = "sha256:2cc73a33305b4b24556957d5857d6253ce1e2dcd67fa0ff46d87d1670b3e1e1d"}, - {file = "scipy-1.16.2-cp314-cp314t-macosx_10_14_x86_64.whl", hash = "sha256:9ea2a3fed83065d77367775d689401a703d0f697420719ee10c0780bcab594d8"}, - {file = "scipy-1.16.2-cp314-cp314t-macosx_12_0_arm64.whl", hash = "sha256:7280d926f11ca945c3ef92ba960fa924e1465f8d07ce3a9923080363390624c4"}, - {file = "scipy-1.16.2-cp314-cp314t-macosx_14_0_arm64.whl", hash = "sha256:8afae1756f6a1fe04636407ef7dbece33d826a5d462b74f3d0eb82deabefd831"}, - {file = "scipy-1.16.2-cp314-cp314t-macosx_14_0_x86_64.whl", hash = "sha256:5c66511f29aa8d233388e7416a3f20d5cae7a2744d5cee2ecd38c081f4e861b3"}, - {file = "scipy-1.16.2-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:efe6305aeaa0e96b0ccca5ff647a43737d9a092064a3894e46c414db84bc54ac"}, - {file = 
"scipy-1.16.2-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:7f3a337d9ae06a1e8d655ee9d8ecb835ea5ddcdcbd8d23012afa055ab014f374"}, - {file = "scipy-1.16.2-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:bab3605795d269067d8ce78a910220262711b753de8913d3deeaedb5dded3bb6"}, - {file = "scipy-1.16.2-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:b0348d8ddb55be2a844c518cd8cc8deeeb8aeba707cf834db5758fc89b476a2c"}, - {file = "scipy-1.16.2-cp314-cp314t-win_amd64.whl", hash = "sha256:26284797e38b8a75e14ea6631d29bda11e76ceaa6ddb6fdebbfe4c4d90faf2f9"}, - {file = "scipy-1.16.2-cp314-cp314t-win_arm64.whl", hash = "sha256:d2a4472c231328d4de38d5f1f68fdd6d28a615138f842580a8a321b5845cf779"}, - {file = "scipy-1.16.2.tar.gz", hash = "sha256:af029b153d243a80afb6eabe40b0a07f8e35c9adc269c019f364ad747f826a6b"}, -] - -[package.dependencies] -numpy = ">=1.25.2,<2.6" - -[package.extras] -dev = ["cython-lint (>=0.12.2)", "doit (>=0.36.0)", "mypy (==1.10.0)", "pycodestyle", "pydevtool", "rich-click", "ruff (>=0.0.292)", "types-psutil", "typing_extensions"] -doc = ["intersphinx_registry", "jupyterlite-pyodide-kernel", "jupyterlite-sphinx (>=0.19.1)", "jupytext", "linkify-it-py", "matplotlib (>=3.5)", "myst-nb (>=1.2.0)", "numpydoc", "pooch", "pydata-sphinx-theme (>=0.15.2)", "sphinx (>=5.0.0,<8.2.0)", "sphinx-copybutton", "sphinx-design (>=0.4.0)"] -test = ["Cython", "array-api-strict (>=2.3.1)", "asv", "gmpy2", "hypothesis (>=6.30)", "meson", "mpmath", "ninja ; sys_platform != \"emscripten\"", "pooch", "pytest (>=8.0.0)", "pytest-cov", "pytest-timeout", "pytest-xdist", "scikit-umfpack", "threadpoolctl"] - [[package]] name = "setuptools" version = "80.9.0" @@ -1160,6 +1268,36 @@ enabler = ["pytest-enabler (>=2.2)"] test = ["build[virtualenv] (>=1.0.3)", "filelock (>=3.4.0)", "ini2toml[lite] (>=0.14)", "jaraco.develop (>=7.21) ; python_version >= \"3.9\" and sys_platform != \"cygwin\"", "jaraco.envs (>=2.2)", "jaraco.path (>=3.7.2)", "jaraco.test 
(>=5.5)", "packaging (>=24.2)", "pip (>=19.1)", "pyproject-hooks (!=1.1)", "pytest (>=6,!=8.1.*)", "pytest-home (>=0.5)", "pytest-perf ; sys_platform != \"cygwin\"", "pytest-subprocess", "pytest-timeout", "pytest-xdist (>=3)", "tomli-w (>=1.0.0)", "virtualenv (>=13.0.0)", "wheel (>=0.44.0)"] type = ["importlib_metadata (>=7.0.2) ; python_version < \"3.10\"", "jaraco.develop (>=7.21) ; sys_platform != \"cygwin\"", "mypy (==1.14.*)", "pytest-mypy"] +[[package]] +name = "sniffio" +version = "1.3.1" +description = "Sniff out which async library your code is running under" +optional = false +python-versions = ">=3.7" +groups = ["main"] +files = [ + {file = "sniffio-1.3.1-py3-none-any.whl", hash = "sha256:2f6da418d1f1e0fddd844478f41680e794e6051915791a034ff65e5f100525a2"}, + {file = "sniffio-1.3.1.tar.gz", hash = "sha256:f4324edc670a0f49750a81b895f35c3adb843cca46f0530f79fc1babb23789dc"}, +] + +[[package]] +name = "starlette" +version = "0.27.0" +description = "The little ASGI library that shines." 
+optional = false +python-versions = ">=3.7" +groups = ["main"] +files = [ + {file = "starlette-0.27.0-py3-none-any.whl", hash = "sha256:918416370e846586541235ccd38a474c08b80443ed31c578a418e2209b3eef91"}, + {file = "starlette-0.27.0.tar.gz", hash = "sha256:6a6b0d042acb8d469a01eba54e9cda6cbd24ac602c4cd016723117d6a7e73b75"}, +] + +[package.dependencies] +anyio = ">=3.4.0,<5" + +[package.extras] +full = ["httpx (>=0.22.0)", "itsdangerous", "jinja2", "python-multipart", "pyyaml"] + [[package]] name = "sympy" version = "1.14.0" @@ -1178,50 +1316,60 @@ mpmath = ">=1.1.0,<1.4" [package.extras] dev = ["hypothesis (>=6.70.0)", "pytest (>=7.1.0)"] -[[package]] -name = "threadpoolctl" -version = "3.6.0" -description = "threadpoolctl" -optional = false -python-versions = ">=3.9" -groups = ["main"] -files = [ - {file = "threadpoolctl-3.6.0-py3-none-any.whl", hash = "sha256:43a0b8fd5a2928500110039e43a5eed8480b918967083ea48dc3ab9f13c4a7fb"}, - {file = "threadpoolctl-3.6.0.tar.gz", hash = "sha256:8ab8b4aa3491d812b623328249fab5302a68d2d71745c8a4c719a2fcaba9f44e"}, -] - [[package]] name = "tokenizers" -version = "0.22.1" -description = "" +version = "0.13.3" +description = "Fast and Customizable Tokenizers" optional = false -python-versions = ">=3.9" +python-versions = "*" groups = ["main"] files = [ - {file = "tokenizers-0.22.1-cp39-abi3-macosx_10_12_x86_64.whl", hash = "sha256:59fdb013df17455e5f950b4b834a7b3ee2e0271e6378ccb33aa74d178b513c73"}, - {file = "tokenizers-0.22.1-cp39-abi3-macosx_11_0_arm64.whl", hash = "sha256:8d4e484f7b0827021ac5f9f71d4794aaef62b979ab7608593da22b1d2e3c4edc"}, - {file = "tokenizers-0.22.1-cp39-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:19d2962dd28bc67c1f205ab180578a78eef89ac60ca7ef7cbe9635a46a56422a"}, - {file = "tokenizers-0.22.1-cp39-abi3-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:38201f15cdb1f8a6843e6563e6e79f4abd053394992b9bbdf5213ea3469b4ae7"}, - {file = 
"tokenizers-0.22.1-cp39-abi3-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d1cbe5454c9a15df1b3443c726063d930c16f047a3cc724b9e6e1a91140e5a21"}, - {file = "tokenizers-0.22.1-cp39-abi3-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:e7d094ae6312d69cc2a872b54b91b309f4f6fbce871ef28eb27b52a98e4d0214"}, - {file = "tokenizers-0.22.1-cp39-abi3-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:afd7594a56656ace95cdd6df4cca2e4059d294c5cfb1679c57824b605556cb2f"}, - {file = "tokenizers-0.22.1-cp39-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e2ef6063d7a84994129732b47e7915e8710f27f99f3a3260b8a38fc7ccd083f4"}, - {file = "tokenizers-0.22.1-cp39-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:ba0a64f450b9ef412c98f6bcd2a50c6df6e2443b560024a09fa6a03189726879"}, - {file = "tokenizers-0.22.1-cp39-abi3-musllinux_1_2_armv7l.whl", hash = "sha256:331d6d149fa9c7d632cde4490fb8bbb12337fa3a0232e77892be656464f4b446"}, - {file = "tokenizers-0.22.1-cp39-abi3-musllinux_1_2_i686.whl", hash = "sha256:607989f2ea68a46cb1dfbaf3e3aabdf3f21d8748312dbeb6263d1b3b66c5010a"}, - {file = "tokenizers-0.22.1-cp39-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:a0f307d490295717726598ef6fa4f24af9d484809223bbc253b201c740a06390"}, - {file = "tokenizers-0.22.1-cp39-abi3-win32.whl", hash = "sha256:b5120eed1442765cd90b903bb6cfef781fd8fe64e34ccaecbae4c619b7b12a82"}, - {file = "tokenizers-0.22.1-cp39-abi3-win_amd64.whl", hash = "sha256:65fd6e3fb11ca1e78a6a93602490f134d1fdeb13bcef99389d5102ea318ed138"}, - {file = "tokenizers-0.22.1.tar.gz", hash = "sha256:61de6522785310a309b3407bac22d99c4db5dba349935e99e4d15ea2226af2d9"}, + {file = "tokenizers-0.13.3-cp310-cp310-macosx_10_11_x86_64.whl", hash = "sha256:f3835c5be51de8c0a092058a4d4380cb9244fb34681fd0a295fbf0a52a5fdf33"}, + {file = "tokenizers-0.13.3-cp310-cp310-macosx_12_0_arm64.whl", hash = "sha256:4ef4c3e821730f2692489e926b184321e887f34fb8a6b80b8096b966ba663d07"}, + {file = 
"tokenizers-0.13.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c5fd1a6a25353e9aa762e2aae5a1e63883cad9f4e997c447ec39d071020459bc"}, + {file = "tokenizers-0.13.3-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ee0b1b311d65beab83d7a41c56a1e46ab732a9eed4460648e8eb0bd69fc2d059"}, + {file = "tokenizers-0.13.3-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:5ef4215284df1277dadbcc5e17d4882bda19f770d02348e73523f7e7d8b8d396"}, + {file = "tokenizers-0.13.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a4d53976079cff8a033f778fb9adca2d9d69d009c02fa2d71a878b5f3963ed30"}, + {file = "tokenizers-0.13.3-cp310-cp310-win32.whl", hash = "sha256:1f0e3b4c2ea2cd13238ce43548959c118069db7579e5d40ec270ad77da5833ce"}, + {file = "tokenizers-0.13.3-cp310-cp310-win_amd64.whl", hash = "sha256:89649c00d0d7211e8186f7a75dfa1db6996f65edce4b84821817eadcc2d3c79e"}, + {file = "tokenizers-0.13.3-cp311-cp311-macosx_10_11_universal2.whl", hash = "sha256:56b726e0d2bbc9243872b0144515ba684af5b8d8cd112fb83ee1365e26ec74c8"}, + {file = "tokenizers-0.13.3-cp311-cp311-macosx_12_0_arm64.whl", hash = "sha256:cc5c022ce692e1f499d745af293ab9ee6f5d92538ed2faf73f9708c89ee59ce6"}, + {file = "tokenizers-0.13.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f55c981ac44ba87c93e847c333e58c12abcbb377a0c2f2ef96e1a266e4184ff2"}, + {file = "tokenizers-0.13.3-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:f247eae99800ef821a91f47c5280e9e9afaeed9980fc444208d5aa6ba69ff148"}, + {file = "tokenizers-0.13.3-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4b3e3215d048e94f40f1c95802e45dcc37c5b05eb46280fc2ccc8cd351bff839"}, + {file = "tokenizers-0.13.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9ba2b0bf01777c9b9bc94b53764d6684554ce98551fec496f71bc5be3a03e98b"}, + {file = 
"tokenizers-0.13.3-cp311-cp311-win32.whl", hash = "sha256:cc78d77f597d1c458bf0ea7c2a64b6aa06941c7a99cb135b5969b0278824d808"}, + {file = "tokenizers-0.13.3-cp311-cp311-win_amd64.whl", hash = "sha256:ecf182bf59bd541a8876deccf0360f5ae60496fd50b58510048020751cf1724c"}, + {file = "tokenizers-0.13.3-cp37-cp37m-macosx_10_11_x86_64.whl", hash = "sha256:0527dc5436a1f6bf2c0327da3145687d3bcfbeab91fed8458920093de3901b44"}, + {file = "tokenizers-0.13.3-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:07cbb2c307627dc99b44b22ef05ff4473aa7c7cc1fec8f0a8b37d8a64b1a16d2"}, + {file = "tokenizers-0.13.3-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:4560dbdeaae5b7ee0d4e493027e3de6d53c991b5002d7ff95083c99e11dd5ac0"}, + {file = "tokenizers-0.13.3-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:64064bd0322405c9374305ab9b4c07152a1474370327499911937fd4a76d004b"}, + {file = "tokenizers-0.13.3-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b8c6e2ab0f2e3d939ca66aa1d596602105fe33b505cd2854a4c1717f704c51de"}, + {file = "tokenizers-0.13.3-cp37-cp37m-win32.whl", hash = "sha256:6cc29d410768f960db8677221e497226e545eaaea01aa3613fa0fdf2cc96cff4"}, + {file = "tokenizers-0.13.3-cp37-cp37m-win_amd64.whl", hash = "sha256:fc2a7fdf864554a0dacf09d32e17c0caa9afe72baf9dd7ddedc61973bae352d8"}, + {file = "tokenizers-0.13.3-cp38-cp38-macosx_10_11_x86_64.whl", hash = "sha256:8791dedba834c1fc55e5f1521be325ea3dafb381964be20684b92fdac95d79b7"}, + {file = "tokenizers-0.13.3-cp38-cp38-macosx_12_0_arm64.whl", hash = "sha256:d607a6a13718aeb20507bdf2b96162ead5145bbbfa26788d6b833f98b31b26e1"}, + {file = "tokenizers-0.13.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3791338f809cd1bf8e4fee6b540b36822434d0c6c6bc47162448deee3f77d425"}, + {file = "tokenizers-0.13.3-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = 
"sha256:c2f35f30e39e6aab8716f07790f646bdc6e4a853816cc49a95ef2a9016bf9ce6"}, + {file = "tokenizers-0.13.3-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:310204dfed5aa797128b65d63538a9837cbdd15da2a29a77d67eefa489edda26"}, + {file = "tokenizers-0.13.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a0f9b92ea052305166559f38498b3b0cae159caea712646648aaa272f7160963"}, + {file = "tokenizers-0.13.3-cp38-cp38-win32.whl", hash = "sha256:9a3fa134896c3c1f0da6e762d15141fbff30d094067c8f1157b9fdca593b5806"}, + {file = "tokenizers-0.13.3-cp38-cp38-win_amd64.whl", hash = "sha256:8e7b0cdeace87fa9e760e6a605e0ae8fc14b7d72e9fc19c578116f7287bb873d"}, + {file = "tokenizers-0.13.3-cp39-cp39-macosx_10_11_x86_64.whl", hash = "sha256:00cee1e0859d55507e693a48fa4aef07060c4bb6bd93d80120e18fea9371c66d"}, + {file = "tokenizers-0.13.3-cp39-cp39-macosx_12_0_arm64.whl", hash = "sha256:a23ff602d0797cea1d0506ce69b27523b07e70f6dda982ab8cf82402de839088"}, + {file = "tokenizers-0.13.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:70ce07445050b537d2696022dafb115307abdffd2a5c106f029490f84501ef97"}, + {file = "tokenizers-0.13.3-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:280ffe95f50eaaf655b3a1dc7ff1d9cf4777029dbbc3e63a74e65a056594abc3"}, + {file = "tokenizers-0.13.3-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:97acfcec592f7e9de8cadcdcda50a7134423ac8455c0166b28c9ff04d227b371"}, + {file = "tokenizers-0.13.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dd7730c98a3010cd4f523465867ff95cd9d6430db46676ce79358f65ae39797b"}, + {file = "tokenizers-0.13.3-cp39-cp39-win32.whl", hash = "sha256:48625a108029cb1ddf42e17a81b5a3230ba6888a70c9dc14e81bc319e812652d"}, + {file = "tokenizers-0.13.3-cp39-cp39-win_amd64.whl", hash = "sha256:bc0a6f1ba036e482db6453571c9e3e60ecd5489980ffd95d11dc9f960483d783"}, + {file = "tokenizers-0.13.3.tar.gz", hash = 
"sha256:2e546dbb68b623008a5442353137fbb0123d311a6d7ba52f2667c8862a75af2e"}, ] -[package.dependencies] -huggingface-hub = ">=0.16.4,<2.0" - [package.extras] -dev = ["tokenizers[testing]"] +dev = ["black (==22.3)", "datasets", "numpy", "pytest", "requests"] docs = ["setuptools-rust", "sphinx", "sphinx-rtd-theme"] -testing = ["black (==22.3)", "datasets", "numpy", "pytest", "pytest-asyncio", "requests", "ruff"] +testing = ["black (==22.3)", "datasets", "numpy", "pytest", "requests"] [[package]] name = "torch" @@ -1310,78 +1458,73 @@ telegram = ["requests"] [[package]] name = "transformers" -version = "4.56.2" +version = "4.33.3" description = "State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow" optional = false -python-versions = ">=3.9.0" +python-versions = ">=3.8.0" groups = ["main"] files = [ - {file = "transformers-4.56.2-py3-none-any.whl", hash = "sha256:79c03d0e85b26cb573c109ff9eafa96f3c8d4febfd8a0774e8bba32702dd6dde"}, - {file = "transformers-4.56.2.tar.gz", hash = "sha256:5e7c623e2d7494105c726dd10f6f90c2c99a55ebe86eef7233765abd0cb1c529"}, + {file = "transformers-4.33.3-py3-none-any.whl", hash = "sha256:7150bbf6781ddb3338ce7d74f4d6f557e6c236a0a1dd3de57412214caae7fd71"}, + {file = "transformers-4.33.3.tar.gz", hash = "sha256:8ea7c92310dee7c63b14766ce928218f7a9177960b2487ac018c91ae621af03e"}, ] [package.dependencies] filelock = "*" -huggingface-hub = ">=0.34.0,<1.0" +huggingface-hub = ">=0.15.1,<1.0" numpy = ">=1.17" packaging = ">=20.0" pyyaml = ">=5.1" regex = "!=2019.12.17" requests = "*" -safetensors = ">=0.4.3" -tokenizers = ">=0.22.0,<=0.23.0" +safetensors = ">=0.3.1" +tokenizers = ">=0.11.1,<0.11.3 || >0.11.3,<0.14" tqdm = ">=4.27" [package.extras] -accelerate = ["accelerate (>=0.26.0)"] -all = ["Pillow (>=10.0.1,<=15.0)", "accelerate (>=0.26.0)", "av", "codecarbon (>=2.8.1)", "flax (>=0.4.1,<=0.7.0)", "jax (>=0.4.1,<=0.4.13)", "jaxlib (>=0.4.1,<=0.4.13)", "jinja2 (>=3.1.0)", "kenlm", "keras-nlp (>=0.3.1,<0.14.0)", "kernels (>=0.6.1,<=0.9)", 
"librosa", "mistral-common[opencv] (>=1.6.3)", "num2words", "onnxconverter-common", "optax (>=0.0.8,<=0.1.4)", "optuna", "phonemizer", "protobuf", "pyctcdecode (>=0.4.0)", "ray[tune] (>=2.7.0)", "scipy (<1.13.0)", "sentencepiece (>=0.1.91,!=0.1.92)", "sigopt", "tensorflow (>2.9,<2.16)", "tensorflow-text (<2.16)", "tf2onnx", "timm (!=1.0.18,<=1.0.19)", "tokenizers (>=0.22.0,<=0.23.0)", "torch (>=2.2)", "torchaudio", "torchvision"] +accelerate = ["accelerate (>=0.20.3)"] +agents = ["Pillow (<10.0.0)", "accelerate (>=0.20.3)", "datasets (!=2.5.0)", "diffusers", "opencv-python", "sentencepiece (>=0.1.91,!=0.1.92)", "torch (>=1.10,!=1.12.0)"] +all = ["Pillow (<10.0.0)", "accelerate (>=0.20.3)", "av (==9.2.0)", "codecarbon (==1.2.0)", "decord (==0.6.0)", "flax (>=0.4.1,<=0.7.0)", "jax (>=0.4.1,<=0.4.13)", "jaxlib (>=0.4.1,<=0.4.13)", "kenlm", "keras-nlp (>=0.3.1)", "librosa", "onnxconverter-common", "optax (>=0.0.8,<=0.1.4)", "optuna", "phonemizer", "protobuf", "pyctcdecode (>=0.4.0)", "ray[tune]", "sentencepiece (>=0.1.91,!=0.1.92)", "sigopt", "tensorflow (>=2.6,<2.15)", "tensorflow-text (<2.15)", "tf2onnx", "timm", "tokenizers (>=0.11.1,!=0.11.3,<0.14)", "torch (>=1.10,!=1.12.0)", "torchaudio", "torchvision"] audio = ["kenlm", "librosa", "phonemizer", "pyctcdecode (>=0.4.0)"] -benchmark = ["optimum-benchmark (>=0.3.0)"] -chat-template = ["jinja2 (>=3.1.0)"] -codecarbon = ["codecarbon (>=2.8.1)"] -deepspeed = ["accelerate (>=0.26.0)", "deepspeed (>=0.9.3)"] -deepspeed-testing = ["GitPython (<3.1.19)", "accelerate (>=0.26.0)", "beautifulsoup4", "cookiecutter (==1.7.3)", "datasets (>=2.15.0)", "deepspeed (>=0.9.3)", "dill (<0.3.5)", "evaluate (>=0.2.0)", "faiss-cpu", "libcst", "mistral-common[opencv] (>=1.6.3)", "nltk (<=3.8.1)", "optuna", "parameterized (>=0.9)", "protobuf", "psutil", "pydantic (>=2)", "pytest (>=7.2.0)", "pytest-asyncio", "pytest-order", "pytest-rerunfailures", "pytest-rich", "pytest-timeout", "pytest-xdist", "rjieba", "rouge-score 
(!=0.0.7,!=0.0.8,!=0.1,!=0.1.1)", "ruff (==0.11.2)", "sacrebleu (>=1.4.12,<2.0.0)", "sacremoses", "sentencepiece (>=0.1.91,!=0.1.92)", "tensorboard", "timeout-decorator"] -dev = ["GitPython (<3.1.19)", "Pillow (>=10.0.1,<=15.0)", "accelerate (>=0.26.0)", "av", "beautifulsoup4", "codecarbon (>=2.8.1)", "cookiecutter (==1.7.3)", "datasets (>=2.15.0)", "dill (<0.3.5)", "evaluate (>=0.2.0)", "faiss-cpu", "flax (>=0.4.1,<=0.7.0)", "fugashi (>=1.0)", "ipadic (>=1.0.0,<2.0)", "jax (>=0.4.1,<=0.4.13)", "jaxlib (>=0.4.1,<=0.4.13)", "jinja2 (>=3.1.0)", "kenlm", "keras-nlp (>=0.3.1,<0.14.0)", "kernels (>=0.6.1,<=0.9)", "libcst", "librosa", "mistral-common[opencv] (>=1.6.3)", "nltk (<=3.8.1)", "num2words", "onnxconverter-common", "optax (>=0.0.8,<=0.1.4)", "optuna", "pandas (<2.3.0)", "parameterized (>=0.9)", "phonemizer", "protobuf", "psutil", "pyctcdecode (>=0.4.0)", "pydantic (>=2)", "pytest (>=7.2.0)", "pytest-asyncio", "pytest-order", "pytest-rerunfailures", "pytest-rich", "pytest-timeout", "pytest-xdist", "ray[tune] (>=2.7.0)", "rhoknp (>=1.1.0,<1.3.1)", "rich", "rjieba", "rouge-score (!=0.0.7,!=0.0.8,!=0.1,!=0.1.1)", "ruff (==0.11.2)", "sacrebleu (>=1.4.12,<2.0.0)", "sacremoses", "scikit-learn", "scipy (<1.13.0)", "sentencepiece (>=0.1.91,!=0.1.92)", "sigopt", "sudachidict-core (>=20220729)", "sudachipy (>=0.6.6)", "tensorboard", "tensorflow (>2.9,<2.16)", "tensorflow-text (<2.16)", "tf2onnx", "timeout-decorator", "timm (!=1.0.18,<=1.0.19)", "tokenizers (>=0.22.0,<=0.23.0)", "torch (>=2.2)", "torchaudio", "torchvision", "unidic (>=1.0.2)", "unidic-lite (>=1.0.7)", "urllib3 (<2.0.0)"] -dev-tensorflow = ["GitPython (<3.1.19)", "Pillow (>=10.0.1,<=15.0)", "beautifulsoup4", "cookiecutter (==1.7.3)", "datasets (>=2.15.0)", "dill (<0.3.5)", "evaluate (>=0.2.0)", "faiss-cpu", "kenlm", "keras-nlp (>=0.3.1,<0.14.0)", "libcst", "librosa", "mistral-common[opencv] (>=1.6.3)", "nltk (<=3.8.1)", "onnxconverter-common", "onnxruntime (>=1.4.0)", "onnxruntime-tools (>=1.4.2)", "pandas 
(<2.3.0)", "parameterized (>=0.9)", "phonemizer", "protobuf", "psutil", "pyctcdecode (>=0.4.0)", "pydantic (>=2)", "pytest (>=7.2.0)", "pytest-asyncio", "pytest-order", "pytest-rerunfailures", "pytest-rich", "pytest-timeout", "pytest-xdist", "rich", "rjieba", "rouge-score (!=0.0.7,!=0.0.8,!=0.1,!=0.1.1)", "ruff (==0.11.2)", "sacrebleu (>=1.4.12,<2.0.0)", "sacremoses", "scikit-learn", "sentencepiece (>=0.1.91,!=0.1.92)", "tensorboard", "tensorflow (>2.9,<2.16)", "tensorflow-text (<2.16)", "tf2onnx", "timeout-decorator", "tokenizers (>=0.22.0,<=0.23.0)", "urllib3 (<2.0.0)"] -dev-torch = ["GitPython (<3.1.19)", "Pillow (>=10.0.1,<=15.0)", "accelerate (>=0.26.0)", "beautifulsoup4", "codecarbon (>=2.8.1)", "cookiecutter (==1.7.3)", "datasets (>=2.15.0)", "dill (<0.3.5)", "evaluate (>=0.2.0)", "faiss-cpu", "fugashi (>=1.0)", "ipadic (>=1.0.0,<2.0)", "kenlm", "kernels (>=0.6.1,<=0.9)", "libcst", "librosa", "mistral-common[opencv] (>=1.6.3)", "nltk (<=3.8.1)", "num2words", "onnxruntime (>=1.4.0)", "onnxruntime-tools (>=1.4.2)", "optuna", "pandas (<2.3.0)", "parameterized (>=0.9)", "phonemizer", "protobuf", "psutil", "pyctcdecode (>=0.4.0)", "pydantic (>=2)", "pytest (>=7.2.0)", "pytest-asyncio", "pytest-order", "pytest-rerunfailures", "pytest-rich", "pytest-timeout", "pytest-xdist", "ray[tune] (>=2.7.0)", "rhoknp (>=1.1.0,<1.3.1)", "rich", "rjieba", "rouge-score (!=0.0.7,!=0.0.8,!=0.1,!=0.1.1)", "ruff (==0.11.2)", "sacrebleu (>=1.4.12,<2.0.0)", "sacremoses", "scikit-learn", "sentencepiece (>=0.1.91,!=0.1.92)", "sigopt", "sudachidict-core (>=20220729)", "sudachipy (>=0.6.6)", "tensorboard", "timeout-decorator", "timm (!=1.0.18,<=1.0.19)", "tokenizers (>=0.22.0,<=0.23.0)", "torch (>=2.2)", "torchaudio", "torchvision", "unidic (>=1.0.2)", "unidic-lite (>=1.0.7)", "urllib3 (<2.0.0)"] -flax = ["flax (>=0.4.1,<=0.7.0)", "jax (>=0.4.1,<=0.4.13)", "jaxlib (>=0.4.1,<=0.4.13)", "optax (>=0.0.8,<=0.1.4)", "scipy (<1.13.0)"] +codecarbon = ["codecarbon (==1.2.0)"] +deepspeed = 
["accelerate (>=0.20.3)", "deepspeed (>=0.9.3)"] +deepspeed-testing = ["GitPython (<3.1.19)", "accelerate (>=0.20.3)", "beautifulsoup4", "black (>=23.1,<24.0)", "cookiecutter (==1.7.3)", "datasets (!=2.5.0)", "deepspeed (>=0.9.3)", "dill (<0.3.5)", "evaluate (>=0.2.0)", "faiss-cpu", "hf-doc-builder (>=0.3.0)", "nltk", "optuna", "parameterized", "protobuf", "psutil", "pytest (>=7.2.0)", "pytest-timeout", "pytest-xdist", "rjieba", "rouge-score (!=0.0.7,!=0.0.8,!=0.1,!=0.1.1)", "sacrebleu (>=1.4.12,<2.0.0)", "sacremoses", "sentencepiece (>=0.1.91,!=0.1.92)", "timeout-decorator"] +dev = ["GitPython (<3.1.19)", "Pillow (<10.0.0)", "accelerate (>=0.20.3)", "av (==9.2.0)", "beautifulsoup4", "black (>=23.1,<24.0)", "codecarbon (==1.2.0)", "cookiecutter (==1.7.3)", "datasets (!=2.5.0)", "decord (==0.6.0)", "dill (<0.3.5)", "evaluate (>=0.2.0)", "faiss-cpu", "flax (>=0.4.1,<=0.7.0)", "fugashi (>=1.0)", "hf-doc-builder", "hf-doc-builder (>=0.3.0)", "ipadic (>=1.0.0,<2.0)", "isort (>=5.5.4)", "jax (>=0.4.1,<=0.4.13)", "jaxlib (>=0.4.1,<=0.4.13)", "kenlm", "keras-nlp (>=0.3.1)", "librosa", "nltk", "onnxconverter-common", "optax (>=0.0.8,<=0.1.4)", "optuna", "parameterized", "phonemizer", "protobuf", "psutil", "pyctcdecode (>=0.4.0)", "pytest (>=7.2.0)", "pytest-timeout", "pytest-xdist", "ray[tune]", "rhoknp (>=1.1.0,<1.3.1)", "rjieba", "rouge-score (!=0.0.7,!=0.0.8,!=0.1,!=0.1.1)", "ruff (>=0.0.241,<=0.0.259)", "sacrebleu (>=1.4.12,<2.0.0)", "sacremoses", "scikit-learn", "sentencepiece (>=0.1.91,!=0.1.92)", "sigopt", "sudachidict-core (>=20220729)", "sudachipy (>=0.6.6)", "tensorflow (>=2.6,<2.15)", "tensorflow-text (<2.15)", "tf2onnx", "timeout-decorator", "timm", "tokenizers (>=0.11.1,!=0.11.3,<0.14)", "torch (>=1.10,!=1.12.0)", "torchaudio", "torchvision", "unidic (>=1.0.2)", "unidic-lite (>=1.0.7)", "urllib3 (<2.0.0)"] +dev-tensorflow = ["GitPython (<3.1.19)", "Pillow (<10.0.0)", "beautifulsoup4", "black (>=23.1,<24.0)", "cookiecutter (==1.7.3)", "datasets (!=2.5.0)", "dill 
(<0.3.5)", "evaluate (>=0.2.0)", "faiss-cpu", "hf-doc-builder", "hf-doc-builder (>=0.3.0)", "isort (>=5.5.4)", "kenlm", "keras-nlp (>=0.3.1)", "librosa", "nltk", "onnxconverter-common", "onnxruntime (>=1.4.0)", "onnxruntime-tools (>=1.4.2)", "parameterized", "phonemizer", "protobuf", "psutil", "pyctcdecode (>=0.4.0)", "pytest (>=7.2.0)", "pytest-timeout", "pytest-xdist", "rjieba", "rouge-score (!=0.0.7,!=0.0.8,!=0.1,!=0.1.1)", "ruff (>=0.0.241,<=0.0.259)", "sacrebleu (>=1.4.12,<2.0.0)", "sacremoses", "scikit-learn", "sentencepiece (>=0.1.91,!=0.1.92)", "tensorflow (>=2.6,<2.15)", "tensorflow-text (<2.15)", "tf2onnx", "timeout-decorator", "tokenizers (>=0.11.1,!=0.11.3,<0.14)", "urllib3 (<2.0.0)"] +dev-torch = ["GitPython (<3.1.19)", "Pillow (<10.0.0)", "accelerate (>=0.20.3)", "beautifulsoup4", "black (>=23.1,<24.0)", "codecarbon (==1.2.0)", "cookiecutter (==1.7.3)", "datasets (!=2.5.0)", "dill (<0.3.5)", "evaluate (>=0.2.0)", "faiss-cpu", "fugashi (>=1.0)", "hf-doc-builder", "hf-doc-builder (>=0.3.0)", "ipadic (>=1.0.0,<2.0)", "isort (>=5.5.4)", "kenlm", "librosa", "nltk", "onnxruntime (>=1.4.0)", "onnxruntime-tools (>=1.4.2)", "optuna", "parameterized", "phonemizer", "protobuf", "psutil", "pyctcdecode (>=0.4.0)", "pytest (>=7.2.0)", "pytest-timeout", "pytest-xdist", "ray[tune]", "rhoknp (>=1.1.0,<1.3.1)", "rjieba", "rouge-score (!=0.0.7,!=0.0.8,!=0.1,!=0.1.1)", "ruff (>=0.0.241,<=0.0.259)", "sacrebleu (>=1.4.12,<2.0.0)", "sacremoses", "scikit-learn", "sentencepiece (>=0.1.91,!=0.1.92)", "sigopt", "sudachidict-core (>=20220729)", "sudachipy (>=0.6.6)", "timeout-decorator", "timm", "tokenizers (>=0.11.1,!=0.11.3,<0.14)", "torch (>=1.10,!=1.12.0)", "torchaudio", "torchvision", "unidic (>=1.0.2)", "unidic-lite (>=1.0.7)", "urllib3 (<2.0.0)"] +docs = ["Pillow (<10.0.0)", "accelerate (>=0.20.3)", "av (==9.2.0)", "codecarbon (==1.2.0)", "decord (==0.6.0)", "flax (>=0.4.1,<=0.7.0)", "hf-doc-builder", "jax (>=0.4.1,<=0.4.13)", "jaxlib (>=0.4.1,<=0.4.13)", "kenlm", 
"keras-nlp (>=0.3.1)", "librosa", "onnxconverter-common", "optax (>=0.0.8,<=0.1.4)", "optuna", "phonemizer", "protobuf", "pyctcdecode (>=0.4.0)", "ray[tune]", "sentencepiece (>=0.1.91,!=0.1.92)", "sigopt", "tensorflow (>=2.6,<2.15)", "tensorflow-text (<2.15)", "tf2onnx", "timm", "tokenizers (>=0.11.1,!=0.11.3,<0.14)", "torch (>=1.10,!=1.12.0)", "torchaudio", "torchvision"] +docs-specific = ["hf-doc-builder"] +fairscale = ["fairscale (>0.3)"] +flax = ["flax (>=0.4.1,<=0.7.0)", "jax (>=0.4.1,<=0.4.13)", "jaxlib (>=0.4.1,<=0.4.13)", "optax (>=0.0.8,<=0.1.4)"] flax-speech = ["kenlm", "librosa", "phonemizer", "pyctcdecode (>=0.4.0)"] ftfy = ["ftfy"] -hf-xet = ["hf-xet"] -hub-kernels = ["kernels (>=0.6.1,<=0.9)"] -integrations = ["kernels (>=0.6.1,<=0.9)", "optuna", "ray[tune] (>=2.7.0)", "sigopt"] +integrations = ["optuna", "ray[tune]", "sigopt"] ja = ["fugashi (>=1.0)", "ipadic (>=1.0.0,<2.0)", "rhoknp (>=1.1.0,<1.3.1)", "sudachidict-core (>=20220729)", "sudachipy (>=0.6.6)", "unidic (>=1.0.2)", "unidic-lite (>=1.0.7)"] -mistral-common = ["mistral-common[opencv] (>=1.6.3)"] modelcreation = ["cookiecutter (==1.7.3)"] -natten = ["natten (>=0.14.6,<0.15.0)"] -num2words = ["num2words"] +natten = ["natten (>=0.14.6)"] onnx = ["onnxconverter-common", "onnxruntime (>=1.4.0)", "onnxruntime-tools (>=1.4.2)", "tf2onnx"] onnxruntime = ["onnxruntime (>=1.4.0)", "onnxruntime-tools (>=1.4.2)"] -open-telemetry = ["opentelemetry-api", "opentelemetry-exporter-otlp", "opentelemetry-sdk"] optuna = ["optuna"] -quality = ["GitPython (<3.1.19)", "datasets (>=2.15.0)", "libcst", "pandas (<2.3.0)", "rich", "ruff (==0.11.2)", "urllib3 (<2.0.0)"] -ray = ["ray[tune] (>=2.7.0)"] -retrieval = ["datasets (>=2.15.0)", "faiss-cpu"] -ruff = ["ruff (==0.11.2)"] +quality = ["GitPython (<3.1.19)", "black (>=23.1,<24.0)", "datasets (!=2.5.0)", "hf-doc-builder (>=0.3.0)", "isort (>=5.5.4)", "ruff (>=0.0.241,<=0.0.259)", "urllib3 (<2.0.0)"] +ray = ["ray[tune]"] +retrieval = ["datasets (!=2.5.0)", 
"faiss-cpu"] sagemaker = ["sagemaker (>=2.31.0)"] sentencepiece = ["protobuf", "sentencepiece (>=0.1.91,!=0.1.92)"] -serving = ["accelerate (>=0.26.0)", "fastapi", "openai (>=1.98.0)", "pydantic (>=2)", "starlette", "torch (>=2.2)", "uvicorn"] +serving = ["fastapi", "pydantic (<2)", "starlette", "uvicorn"] sigopt = ["sigopt"] sklearn = ["scikit-learn"] speech = ["kenlm", "librosa", "phonemizer", "pyctcdecode (>=0.4.0)", "torchaudio"] -testing = ["GitPython (<3.1.19)", "beautifulsoup4", "cookiecutter (==1.7.3)", "datasets (>=2.15.0)", "dill (<0.3.5)", "evaluate (>=0.2.0)", "faiss-cpu", "libcst", "mistral-common[opencv] (>=1.6.3)", "nltk (<=3.8.1)", "parameterized (>=0.9)", "psutil", "pydantic (>=2)", "pytest (>=7.2.0)", "pytest-asyncio", "pytest-order", "pytest-rerunfailures", "pytest-rich", "pytest-timeout", "pytest-xdist", "rjieba", "rouge-score (!=0.0.7,!=0.0.8,!=0.1,!=0.1.1)", "ruff (==0.11.2)", "sacrebleu (>=1.4.12,<2.0.0)", "sacremoses", "sentencepiece (>=0.1.91,!=0.1.92)", "tensorboard", "timeout-decorator"] -tf = ["keras-nlp (>=0.3.1,<0.14.0)", "onnxconverter-common", "tensorflow (>2.9,<2.16)", "tensorflow-text (<2.16)", "tf2onnx"] -tf-cpu = ["keras (>2.9,<2.16)", "keras-nlp (>=0.3.1,<0.14.0)", "onnxconverter-common", "tensorflow-cpu (>2.9,<2.16)", "tensorflow-probability (<0.24)", "tensorflow-text (<2.16)", "tf2onnx"] +testing = ["GitPython (<3.1.19)", "beautifulsoup4", "black (>=23.1,<24.0)", "cookiecutter (==1.7.3)", "datasets (!=2.5.0)", "dill (<0.3.5)", "evaluate (>=0.2.0)", "faiss-cpu", "hf-doc-builder (>=0.3.0)", "nltk", "parameterized", "protobuf", "psutil", "pytest (>=7.2.0)", "pytest-timeout", "pytest-xdist", "rjieba", "rouge-score (!=0.0.7,!=0.0.8,!=0.1,!=0.1.1)", "sacrebleu (>=1.4.12,<2.0.0)", "sacremoses", "timeout-decorator"] +tf = ["keras-nlp (>=0.3.1)", "onnxconverter-common", "tensorflow (>=2.6,<2.15)", "tensorflow-text (<2.15)", "tf2onnx"] +tf-cpu = ["keras-nlp (>=0.3.1)", "onnxconverter-common", "tensorflow-cpu (>=2.6,<2.15)", 
"tensorflow-text (<2.15)", "tf2onnx"] tf-speech = ["kenlm", "librosa", "phonemizer", "pyctcdecode (>=0.4.0)"] -tiktoken = ["blobfile", "tiktoken"] -timm = ["timm (!=1.0.18,<=1.0.19)"] -tokenizers = ["tokenizers (>=0.22.0,<=0.23.0)"] -torch = ["accelerate (>=0.26.0)", "torch (>=2.2)"] +timm = ["timm"] +tokenizers = ["tokenizers (>=0.11.1,!=0.11.3,<0.14)"] +torch = ["accelerate (>=0.20.3)", "torch (>=1.10,!=1.12.0)"] torch-speech = ["kenlm", "librosa", "phonemizer", "pyctcdecode (>=0.4.0)", "torchaudio"] -torch-vision = ["Pillow (>=10.0.1,<=15.0)", "torchvision"] -torchhub = ["filelock", "huggingface-hub (>=0.34.0,<1.0)", "importlib-metadata", "numpy (>=1.17)", "packaging (>=20.0)", "protobuf", "regex (!=2019.12.17)", "requests", "sentencepiece (>=0.1.91,!=0.1.92)", "tokenizers (>=0.22.0,<=0.23.0)", "torch (>=2.2)", "tqdm (>=4.27)"] -video = ["av"] -vision = ["Pillow (>=10.0.1,<=15.0)"] +torch-vision = ["Pillow (<10.0.0)", "torchvision"] +torchhub = ["filelock", "huggingface-hub (>=0.15.1,<1.0)", "importlib-metadata", "numpy (>=1.17)", "packaging (>=20.0)", "protobuf", "regex (!=2019.12.17)", "requests", "sentencepiece (>=0.1.91,!=0.1.92)", "tokenizers (>=0.11.1,!=0.11.3,<0.14)", "torch (>=1.10,!=1.12.0)", "tqdm (>=4.27)"] +video = ["av (==9.2.0)", "decord (==0.6.0)"] +vision = ["Pillow (<10.0.0)"] [[package]] name = "triton" @@ -1420,6 +1563,21 @@ files = [ {file = "typing_extensions-4.15.0.tar.gz", hash = "sha256:0cea48d173cc12fa28ecabc3b837ea3cf6f38c6d1136f85cbaaf598984861466"}, ] +[[package]] +name = "typing-inspection" +version = "0.4.2" +description = "Runtime typing introspection tools" +optional = false +python-versions = ">=3.9" +groups = ["main"] +files = [ + {file = "typing_inspection-0.4.2-py3-none-any.whl", hash = "sha256:4ed1cacbdc298c220f1bd249ed5287caa16f34d44ef4e9c3d0cbad5b521545e7"}, + {file = "typing_inspection-0.4.2.tar.gz", hash = "sha256:ba561c48a67c5958007083d386c3295464928b01faa735ab8547c5692e87f464"}, +] + +[package.dependencies] 
+typing-extensions = ">=4.12.0" + [[package]] name = "urllib3" version = "2.5.0" @@ -1438,7 +1596,284 @@ h2 = ["h2 (>=4,<5)"] socks = ["pysocks (>=1.5.6,!=1.5.7,<2.0)"] zstd = ["zstandard (>=0.18.0)"] +[[package]] +name = "uvicorn" +version = "0.24.0.post1" +description = "The lightning-fast ASGI server." +optional = false +python-versions = ">=3.8" +groups = ["main"] +files = [ + {file = "uvicorn-0.24.0.post1-py3-none-any.whl", hash = "sha256:7c84fea70c619d4a710153482c0d230929af7bcf76c7bfa6de151f0a3a80121e"}, + {file = "uvicorn-0.24.0.post1.tar.gz", hash = "sha256:09c8e5a79dc466bdf28dead50093957db184de356fcdc48697bad3bde4c2588e"}, +] + +[package.dependencies] +click = ">=7.0" +colorama = {version = ">=0.4", optional = true, markers = "sys_platform == \"win32\" and extra == \"standard\""} +h11 = ">=0.8" +httptools = {version = ">=0.5.0", optional = true, markers = "extra == \"standard\""} +python-dotenv = {version = ">=0.13", optional = true, markers = "extra == \"standard\""} +pyyaml = {version = ">=5.1", optional = true, markers = "extra == \"standard\""} +uvloop = {version = ">=0.14.0,<0.15.0 || >0.15.0,<0.15.1 || >0.15.1", optional = true, markers = "sys_platform != \"win32\" and sys_platform != \"cygwin\" and platform_python_implementation != \"PyPy\" and extra == \"standard\""} +watchfiles = {version = ">=0.13", optional = true, markers = "extra == \"standard\""} +websockets = {version = ">=10.4", optional = true, markers = "extra == \"standard\""} + +[package.extras] +standard = ["colorama (>=0.4) ; sys_platform == \"win32\"", "httptools (>=0.5.0)", "python-dotenv (>=0.13)", "pyyaml (>=5.1)", "uvloop (>=0.14.0,!=0.15.0,!=0.15.1) ; sys_platform != \"win32\" and sys_platform != \"cygwin\" and platform_python_implementation != \"PyPy\"", "watchfiles (>=0.13)", "websockets (>=10.4)"] + +[[package]] +name = "uvloop" +version = "0.21.0" +description = "Fast implementation of asyncio event loop on top of libuv" +optional = false +python-versions = ">=3.8.0" +groups 
= ["main"] +markers = "sys_platform != \"win32\" and sys_platform != \"cygwin\" and platform_python_implementation != \"PyPy\"" +files = [ + {file = "uvloop-0.21.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:ec7e6b09a6fdded42403182ab6b832b71f4edaf7f37a9a0e371a01db5f0cb45f"}, + {file = "uvloop-0.21.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:196274f2adb9689a289ad7d65700d37df0c0930fd8e4e743fa4834e850d7719d"}, + {file = "uvloop-0.21.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f38b2e090258d051d68a5b14d1da7203a3c3677321cf32a95a6f4db4dd8b6f26"}, + {file = "uvloop-0.21.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:87c43e0f13022b998eb9b973b5e97200c8b90823454d4bc06ab33829e09fb9bb"}, + {file = "uvloop-0.21.0-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:10d66943def5fcb6e7b37310eb6b5639fd2ccbc38df1177262b0640c3ca68c1f"}, + {file = "uvloop-0.21.0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:67dd654b8ca23aed0a8e99010b4c34aca62f4b7fce88f39d452ed7622c94845c"}, + {file = "uvloop-0.21.0-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:c0f3fa6200b3108919f8bdabb9a7f87f20e7097ea3c543754cabc7d717d95cf8"}, + {file = "uvloop-0.21.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:0878c2640cf341b269b7e128b1a5fed890adc4455513ca710d77d5e93aa6d6a0"}, + {file = "uvloop-0.21.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b9fb766bb57b7388745d8bcc53a359b116b8a04c83a2288069809d2b3466c37e"}, + {file = "uvloop-0.21.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8a375441696e2eda1c43c44ccb66e04d61ceeffcd76e4929e527b7fa401b90fb"}, + {file = "uvloop-0.21.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:baa0e6291d91649c6ba4ed4b2f982f9fa165b5bbd50a9e203c416a2797bab3c6"}, + {file = "uvloop-0.21.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:4509360fcc4c3bd2c70d87573ad472de40c13387f5fda8cb58350a1d7475e58d"}, + 
{file = "uvloop-0.21.0-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:359ec2c888397b9e592a889c4d72ba3d6befba8b2bb01743f72fffbde663b59c"}, + {file = "uvloop-0.21.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:f7089d2dc73179ce5ac255bdf37c236a9f914b264825fdaacaded6990a7fb4c2"}, + {file = "uvloop-0.21.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:baa4dcdbd9ae0a372f2167a207cd98c9f9a1ea1188a8a526431eef2f8116cc8d"}, + {file = "uvloop-0.21.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:86975dca1c773a2c9864f4c52c5a55631038e387b47eaf56210f873887b6c8dc"}, + {file = "uvloop-0.21.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:461d9ae6660fbbafedd07559c6a2e57cd553b34b0065b6550685f6653a98c1cb"}, + {file = "uvloop-0.21.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:183aef7c8730e54c9a3ee3227464daed66e37ba13040bb3f350bc2ddc040f22f"}, + {file = "uvloop-0.21.0-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:bfd55dfcc2a512316e65f16e503e9e450cab148ef11df4e4e679b5e8253a5281"}, + {file = "uvloop-0.21.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:787ae31ad8a2856fc4e7c095341cccc7209bd657d0e71ad0dc2ea83c4a6fa8af"}, + {file = "uvloop-0.21.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5ee4d4ef48036ff6e5cfffb09dd192c7a5027153948d85b8da7ff705065bacc6"}, + {file = "uvloop-0.21.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f3df876acd7ec037a3d005b3ab85a7e4110422e4d9c1571d4fc89b0fc41b6816"}, + {file = "uvloop-0.21.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:bd53ecc9a0f3d87ab847503c2e1552b690362e005ab54e8a48ba97da3924c0dc"}, + {file = "uvloop-0.21.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:a5c39f217ab3c663dc699c04cbd50c13813e31d917642d459fdcec07555cc553"}, + {file = "uvloop-0.21.0-cp38-cp38-macosx_10_9_universal2.whl", hash = 
"sha256:17df489689befc72c39a08359efac29bbee8eee5209650d4b9f34df73d22e414"}, + {file = "uvloop-0.21.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:bc09f0ff191e61c2d592a752423c767b4ebb2986daa9ed62908e2b1b9a9ae206"}, + {file = "uvloop-0.21.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f0ce1b49560b1d2d8a2977e3ba4afb2414fb46b86a1b64056bc4ab929efdafbe"}, + {file = "uvloop-0.21.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e678ad6fe52af2c58d2ae3c73dc85524ba8abe637f134bf3564ed07f555c5e79"}, + {file = "uvloop-0.21.0-cp38-cp38-musllinux_1_2_aarch64.whl", hash = "sha256:460def4412e473896ef179a1671b40c039c7012184b627898eea5072ef6f017a"}, + {file = "uvloop-0.21.0-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:10da8046cc4a8f12c91a1c39d1dd1585c41162a15caaef165c2174db9ef18bdc"}, + {file = "uvloop-0.21.0-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:c097078b8031190c934ed0ebfee8cc5f9ba9642e6eb88322b9958b649750f72b"}, + {file = "uvloop-0.21.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:46923b0b5ee7fc0020bef24afe7836cb068f5050ca04caf6b487c513dc1a20b2"}, + {file = "uvloop-0.21.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:53e420a3afe22cdcf2a0f4846e377d16e718bc70103d7088a4f7623567ba5fb0"}, + {file = "uvloop-0.21.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:88cb67cdbc0e483da00af0b2c3cdad4b7c61ceb1ee0f33fe00e09c81e3a6cb75"}, + {file = "uvloop-0.21.0-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:221f4f2a1f46032b403bf3be628011caf75428ee3cc204a22addf96f586b19fd"}, + {file = "uvloop-0.21.0-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:2d1f581393673ce119355d56da84fe1dd9d2bb8b3d13ce792524e1607139feff"}, + {file = "uvloop-0.21.0.tar.gz", hash = "sha256:3bf12b0fda68447806a7ad847bfa591613177275d35b6724b1ee573faa3704e3"}, +] + +[package.extras] +dev = ["Cython (>=3.0,<4.0)", "setuptools (>=60)"] +docs = ["Sphinx (>=4.1.2,<4.2.0)", 
"sphinx-rtd-theme (>=0.5.2,<0.6.0)", "sphinxcontrib-asyncio (>=0.3.0,<0.4.0)"] +test = ["aiohttp (>=3.10.5)", "flake8 (>=5.0,<6.0)", "mypy (>=0.800)", "psutil", "pyOpenSSL (>=23.0.0,<23.1.0)", "pycodestyle (>=2.9.0,<2.10.0)"] + +[[package]] +name = "watchfiles" +version = "1.1.0" +description = "Simple, modern and high performance file watching and code reload in python." +optional = false +python-versions = ">=3.9" +groups = ["main"] +files = [ + {file = "watchfiles-1.1.0-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:27f30e14aa1c1e91cb653f03a63445739919aef84c8d2517997a83155e7a2fcc"}, + {file = "watchfiles-1.1.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:3366f56c272232860ab45c77c3ca7b74ee819c8e1f6f35a7125556b198bbc6df"}, + {file = "watchfiles-1.1.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8412eacef34cae2836d891836a7fff7b754d6bcac61f6c12ba5ca9bc7e427b68"}, + {file = "watchfiles-1.1.0-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:df670918eb7dd719642e05979fc84704af913d563fd17ed636f7c4783003fdcc"}, + {file = "watchfiles-1.1.0-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d7642b9bc4827b5518ebdb3b82698ada8c14c7661ddec5fe719f3e56ccd13c97"}, + {file = "watchfiles-1.1.0-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:199207b2d3eeaeb80ef4411875a6243d9ad8bc35b07fc42daa6b801cc39cc41c"}, + {file = "watchfiles-1.1.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a479466da6db5c1e8754caee6c262cd373e6e6c363172d74394f4bff3d84d7b5"}, + {file = "watchfiles-1.1.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:935f9edd022ec13e447e5723a7d14456c8af254544cefbc533f6dd276c9aa0d9"}, + {file = "watchfiles-1.1.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:8076a5769d6bdf5f673a19d51da05fc79e2bbf25e9fe755c47595785c06a8c72"}, + {file = "watchfiles-1.1.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash 
= "sha256:86b1e28d4c37e89220e924305cd9f82866bb0ace666943a6e4196c5df4d58dcc"}, + {file = "watchfiles-1.1.0-cp310-cp310-win32.whl", hash = "sha256:d1caf40c1c657b27858f9774d5c0e232089bca9cb8ee17ce7478c6e9264d2587"}, + {file = "watchfiles-1.1.0-cp310-cp310-win_amd64.whl", hash = "sha256:a89c75a5b9bc329131115a409d0acc16e8da8dfd5867ba59f1dd66ae7ea8fa82"}, + {file = "watchfiles-1.1.0-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:c9649dfc57cc1f9835551deb17689e8d44666315f2e82d337b9f07bd76ae3aa2"}, + {file = "watchfiles-1.1.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:406520216186b99374cdb58bc48e34bb74535adec160c8459894884c983a149c"}, + {file = "watchfiles-1.1.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cb45350fd1dc75cd68d3d72c47f5b513cb0578da716df5fba02fff31c69d5f2d"}, + {file = "watchfiles-1.1.0-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:11ee4444250fcbeb47459a877e5e80ed994ce8e8d20283857fc128be1715dac7"}, + {file = "watchfiles-1.1.0-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:bda8136e6a80bdea23e5e74e09df0362744d24ffb8cd59c4a95a6ce3d142f79c"}, + {file = "watchfiles-1.1.0-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:b915daeb2d8c1f5cee4b970f2e2c988ce6514aace3c9296e58dd64dc9aa5d575"}, + {file = "watchfiles-1.1.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:ed8fc66786de8d0376f9f913c09e963c66e90ced9aa11997f93bdb30f7c872a8"}, + {file = "watchfiles-1.1.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fe4371595edf78c41ef8ac8df20df3943e13defd0efcb732b2e393b5a8a7a71f"}, + {file = "watchfiles-1.1.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:b7c5f6fe273291f4d414d55b2c80d33c457b8a42677ad14b4b47ff025d0893e4"}, + {file = "watchfiles-1.1.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:7738027989881e70e3723c75921f1efa45225084228788fc59ea8c6d732eb30d"}, + {file = 
"watchfiles-1.1.0-cp311-cp311-win32.whl", hash = "sha256:622d6b2c06be19f6e89b1d951485a232e3b59618def88dbeda575ed8f0d8dbf2"}, + {file = "watchfiles-1.1.0-cp311-cp311-win_amd64.whl", hash = "sha256:48aa25e5992b61debc908a61ab4d3f216b64f44fdaa71eb082d8b2de846b7d12"}, + {file = "watchfiles-1.1.0-cp311-cp311-win_arm64.whl", hash = "sha256:00645eb79a3faa70d9cb15c8d4187bb72970b2470e938670240c7998dad9f13a"}, + {file = "watchfiles-1.1.0-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:9dc001c3e10de4725c749d4c2f2bdc6ae24de5a88a339c4bce32300a31ede179"}, + {file = "watchfiles-1.1.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:d9ba68ec283153dead62cbe81872d28e053745f12335d037de9cbd14bd1877f5"}, + {file = "watchfiles-1.1.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:130fc497b8ee68dce163e4254d9b0356411d1490e868bd8790028bc46c5cc297"}, + {file = "watchfiles-1.1.0-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:50a51a90610d0845a5931a780d8e51d7bd7f309ebc25132ba975aca016b576a0"}, + {file = "watchfiles-1.1.0-cp312-cp312-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:dc44678a72ac0910bac46fa6a0de6af9ba1355669b3dfaf1ce5f05ca7a74364e"}, + {file = "watchfiles-1.1.0-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a543492513a93b001975ae283a51f4b67973662a375a403ae82f420d2c7205ee"}, + {file = "watchfiles-1.1.0-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:8ac164e20d17cc285f2b94dc31c384bc3aa3dd5e7490473b3db043dd70fbccfd"}, + {file = "watchfiles-1.1.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f7590d5a455321e53857892ab8879dce62d1f4b04748769f5adf2e707afb9d4f"}, + {file = "watchfiles-1.1.0-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:37d3d3f7defb13f62ece99e9be912afe9dd8a0077b7c45ee5a57c74811d581a4"}, + {file = "watchfiles-1.1.0-cp312-cp312-musllinux_1_1_x86_64.whl", hash = 
"sha256:7080c4bb3efd70a07b1cc2df99a7aa51d98685be56be6038c3169199d0a1c69f"}, + {file = "watchfiles-1.1.0-cp312-cp312-win32.whl", hash = "sha256:cbcf8630ef4afb05dc30107bfa17f16c0896bb30ee48fc24bf64c1f970f3b1fd"}, + {file = "watchfiles-1.1.0-cp312-cp312-win_amd64.whl", hash = "sha256:cbd949bdd87567b0ad183d7676feb98136cde5bb9025403794a4c0db28ed3a47"}, + {file = "watchfiles-1.1.0-cp312-cp312-win_arm64.whl", hash = "sha256:0a7d40b77f07be87c6faa93d0951a0fcd8cbca1ddff60a1b65d741bac6f3a9f6"}, + {file = "watchfiles-1.1.0-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:5007f860c7f1f8df471e4e04aaa8c43673429047d63205d1630880f7637bca30"}, + {file = "watchfiles-1.1.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:20ecc8abbd957046f1fe9562757903f5eaf57c3bce70929fda6c7711bb58074a"}, + {file = "watchfiles-1.1.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f2f0498b7d2a3c072766dba3274fe22a183dbea1f99d188f1c6c72209a1063dc"}, + {file = "watchfiles-1.1.0-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:239736577e848678e13b201bba14e89718f5c2133dfd6b1f7846fa1b58a8532b"}, + {file = "watchfiles-1.1.0-cp313-cp313-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:eff4b8d89f444f7e49136dc695599a591ff769300734446c0a86cba2eb2f9895"}, + {file = "watchfiles-1.1.0-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:12b0a02a91762c08f7264e2e79542f76870c3040bbc847fb67410ab81474932a"}, + {file = "watchfiles-1.1.0-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:29e7bc2eee15cbb339c68445959108803dc14ee0c7b4eea556400131a8de462b"}, + {file = "watchfiles-1.1.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d9481174d3ed982e269c090f780122fb59cee6c3796f74efe74e70f7780ed94c"}, + {file = "watchfiles-1.1.0-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:80f811146831c8c86ab17b640801c25dc0a88c630e855e2bef3568f30434d52b"}, + {file = 
"watchfiles-1.1.0-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:60022527e71d1d1fda67a33150ee42869042bce3d0fcc9cc49be009a9cded3fb"}, + {file = "watchfiles-1.1.0-cp313-cp313-win32.whl", hash = "sha256:32d6d4e583593cb8576e129879ea0991660b935177c0f93c6681359b3654bfa9"}, + {file = "watchfiles-1.1.0-cp313-cp313-win_amd64.whl", hash = "sha256:f21af781a4a6fbad54f03c598ab620e3a77032c5878f3d780448421a6e1818c7"}, + {file = "watchfiles-1.1.0-cp313-cp313-win_arm64.whl", hash = "sha256:5366164391873ed76bfdf618818c82084c9db7fac82b64a20c44d335eec9ced5"}, + {file = "watchfiles-1.1.0-cp313-cp313t-macosx_10_12_x86_64.whl", hash = "sha256:17ab167cca6339c2b830b744eaf10803d2a5b6683be4d79d8475d88b4a8a4be1"}, + {file = "watchfiles-1.1.0-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:328dbc9bff7205c215a7807da7c18dce37da7da718e798356212d22696404339"}, + {file = "watchfiles-1.1.0-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f7208ab6e009c627b7557ce55c465c98967e8caa8b11833531fdf95799372633"}, + {file = "watchfiles-1.1.0-cp313-cp313t-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:a8f6f72974a19efead54195bc9bed4d850fc047bb7aa971268fd9a8387c89011"}, + {file = "watchfiles-1.1.0-cp313-cp313t-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d181ef50923c29cf0450c3cd47e2f0557b62218c50b2ab8ce2ecaa02bd97e670"}, + {file = "watchfiles-1.1.0-cp313-cp313t-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:adb4167043d3a78280d5d05ce0ba22055c266cf8655ce942f2fb881262ff3cdf"}, + {file = "watchfiles-1.1.0-cp313-cp313t-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:8c5701dc474b041e2934a26d31d39f90fac8a3dee2322b39f7729867f932b1d4"}, + {file = "watchfiles-1.1.0-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b067915e3c3936966a8607f6fe5487df0c9c4afb85226613b520890049deea20"}, + {file = "watchfiles-1.1.0-cp313-cp313t-musllinux_1_1_aarch64.whl", hash = 
"sha256:9c733cda03b6d636b4219625a4acb5c6ffb10803338e437fb614fef9516825ef"}, + {file = "watchfiles-1.1.0-cp313-cp313t-musllinux_1_1_x86_64.whl", hash = "sha256:cc08ef8b90d78bfac66f0def80240b0197008e4852c9f285907377b2947ffdcb"}, + {file = "watchfiles-1.1.0-cp314-cp314-macosx_10_12_x86_64.whl", hash = "sha256:9974d2f7dc561cce3bb88dfa8eb309dab64c729de85fba32e98d75cf24b66297"}, + {file = "watchfiles-1.1.0-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:c68e9f1fcb4d43798ad8814c4c1b61547b014b667216cb754e606bfade587018"}, + {file = "watchfiles-1.1.0-cp314-cp314-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:95ab1594377effac17110e1352989bdd7bdfca9ff0e5eeccd8c69c5389b826d0"}, + {file = "watchfiles-1.1.0-cp314-cp314-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:fba9b62da882c1be1280a7584ec4515d0a6006a94d6e5819730ec2eab60ffe12"}, + {file = "watchfiles-1.1.0-cp314-cp314-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3434e401f3ce0ed6b42569128b3d1e3af773d7ec18751b918b89cd49c14eaafb"}, + {file = "watchfiles-1.1.0-cp314-cp314-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:fa257a4d0d21fcbca5b5fcba9dca5a78011cb93c0323fb8855c6d2dfbc76eb77"}, + {file = "watchfiles-1.1.0-cp314-cp314-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:7fd1b3879a578a8ec2076c7961076df540b9af317123f84569f5a9ddee64ce92"}, + {file = "watchfiles-1.1.0-cp314-cp314-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:62cc7a30eeb0e20ecc5f4bd113cd69dcdb745a07c68c0370cea919f373f65d9e"}, + {file = "watchfiles-1.1.0-cp314-cp314-musllinux_1_1_aarch64.whl", hash = "sha256:891c69e027748b4a73847335d208e374ce54ca3c335907d381fde4e41661b13b"}, + {file = "watchfiles-1.1.0-cp314-cp314-musllinux_1_1_x86_64.whl", hash = "sha256:12fe8eaffaf0faa7906895b4f8bb88264035b3f0243275e0bf24af0436b27259"}, + {file = "watchfiles-1.1.0-cp314-cp314t-macosx_10_12_x86_64.whl", hash = 
"sha256:bfe3c517c283e484843cb2e357dd57ba009cff351edf45fb455b5fbd1f45b15f"}, + {file = "watchfiles-1.1.0-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:a9ccbf1f129480ed3044f540c0fdbc4ee556f7175e5ab40fe077ff6baf286d4e"}, + {file = "watchfiles-1.1.0-cp314-cp314t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ba0e3255b0396cac3cc7bbace76404dd72b5438bf0d8e7cefa2f79a7f3649caa"}, + {file = "watchfiles-1.1.0-cp314-cp314t-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:4281cd9fce9fc0a9dbf0fc1217f39bf9cf2b4d315d9626ef1d4e87b84699e7e8"}, + {file = "watchfiles-1.1.0-cp314-cp314t-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6d2404af8db1329f9a3c9b79ff63e0ae7131986446901582067d9304ae8aaf7f"}, + {file = "watchfiles-1.1.0-cp314-cp314t-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:e78b6ed8165996013165eeabd875c5dfc19d41b54f94b40e9fff0eb3193e5e8e"}, + {file = "watchfiles-1.1.0-cp314-cp314t-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:249590eb75ccc117f488e2fabd1bfa33c580e24b96f00658ad88e38844a040bb"}, + {file = "watchfiles-1.1.0-cp314-cp314t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d05686b5487cfa2e2c28ff1aa370ea3e6c5accfe6435944ddea1e10d93872147"}, + {file = "watchfiles-1.1.0-cp314-cp314t-musllinux_1_1_aarch64.whl", hash = "sha256:d0e10e6f8f6dc5762adee7dece33b722282e1f59aa6a55da5d493a97282fedd8"}, + {file = "watchfiles-1.1.0-cp314-cp314t-musllinux_1_1_x86_64.whl", hash = "sha256:af06c863f152005c7592df1d6a7009c836a247c9d8adb78fef8575a5a98699db"}, + {file = "watchfiles-1.1.0-cp39-cp39-macosx_10_12_x86_64.whl", hash = "sha256:865c8e95713744cf5ae261f3067861e9da5f1370ba91fc536431e29b418676fa"}, + {file = "watchfiles-1.1.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:42f92befc848bb7a19658f21f3e7bae80d7d005d13891c62c2cd4d4d0abb3433"}, + {file = "watchfiles-1.1.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = 
"sha256:aa0cc8365ab29487eb4f9979fd41b22549853389e22d5de3f134a6796e1b05a4"}, + {file = "watchfiles-1.1.0-cp39-cp39-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:90ebb429e933645f3da534c89b29b665e285048973b4d2b6946526888c3eb2c7"}, + {file = "watchfiles-1.1.0-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c588c45da9b08ab3da81d08d7987dae6d2a3badd63acdb3e206a42dbfa7cb76f"}, + {file = "watchfiles-1.1.0-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:7c55b0f9f68590115c25272b06e63f0824f03d4fc7d6deed43d8ad5660cabdbf"}, + {file = "watchfiles-1.1.0-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:cd17a1e489f02ce9117b0de3c0b1fab1c3e2eedc82311b299ee6b6faf6c23a29"}, + {file = "watchfiles-1.1.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:da71945c9ace018d8634822f16cbc2a78323ef6c876b1d34bbf5d5222fd6a72e"}, + {file = "watchfiles-1.1.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:51556d5004887045dba3acdd1fdf61dddea2be0a7e18048b5e853dcd37149b86"}, + {file = "watchfiles-1.1.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:04e4ed5d1cd3eae68c89bcc1a485a109f39f2fd8de05f705e98af6b5f1861f1f"}, + {file = "watchfiles-1.1.0-cp39-cp39-win32.whl", hash = "sha256:c600e85f2ffd9f1035222b1a312aff85fd11ea39baff1d705b9b047aad2ce267"}, + {file = "watchfiles-1.1.0-cp39-cp39-win_amd64.whl", hash = "sha256:3aba215958d88182e8d2acba0fdaf687745180974946609119953c0e112397dc"}, + {file = "watchfiles-1.1.0-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:3a6fd40bbb50d24976eb275ccb55cd1951dfb63dbc27cae3066a6ca5f4beabd5"}, + {file = "watchfiles-1.1.0-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:9f811079d2f9795b5d48b55a37aa7773680a5659afe34b54cc1d86590a51507d"}, + {file = "watchfiles-1.1.0-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a2726d7bfd9f76158c84c10a409b77a320426540df8c35be172444394b17f7ea"}, + {file = 
"watchfiles-1.1.0-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:df32d59cb9780f66d165a9a7a26f19df2c7d24e3bd58713108b41d0ff4f929c6"}, + {file = "watchfiles-1.1.0-pp311-pypy311_pp73-macosx_10_12_x86_64.whl", hash = "sha256:0ece16b563b17ab26eaa2d52230c9a7ae46cf01759621f4fbbca280e438267b3"}, + {file = "watchfiles-1.1.0-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:51b81e55d40c4b4aa8658427a3ee7ea847c591ae9e8b81ef94a90b668999353c"}, + {file = "watchfiles-1.1.0-pp311-pypy311_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f2bcdc54ea267fe72bfc7d83c041e4eb58d7d8dc6f578dfddb52f037ce62f432"}, + {file = "watchfiles-1.1.0-pp311-pypy311_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:923fec6e5461c42bd7e3fd5ec37492c6f3468be0499bc0707b4bbbc16ac21792"}, + {file = "watchfiles-1.1.0-pp39-pypy39_pp73-macosx_10_12_x86_64.whl", hash = "sha256:7b3443f4ec3ba5aa00b0e9fa90cf31d98321cbff8b925a7c7b84161619870bc9"}, + {file = "watchfiles-1.1.0-pp39-pypy39_pp73-macosx_11_0_arm64.whl", hash = "sha256:7049e52167fc75fc3cc418fc13d39a8e520cbb60ca08b47f6cedb85e181d2f2a"}, + {file = "watchfiles-1.1.0-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:54062ef956807ba806559b3c3d52105ae1827a0d4ab47b621b31132b6b7e2866"}, + {file = "watchfiles-1.1.0-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7a7bd57a1bb02f9d5c398c0c1675384e7ab1dd39da0ca50b7f09af45fa435277"}, + {file = "watchfiles-1.1.0.tar.gz", hash = "sha256:693ed7ec72cbfcee399e92c895362b6e66d63dac6b91e2c11ae03d10d503e575"}, +] + +[package.dependencies] +anyio = ">=3.0.0" + +[[package]] +name = "websockets" +version = "15.0.1" +description = "An implementation of the WebSocket Protocol (RFC 6455 & 7692)" +optional = false +python-versions = ">=3.9" +groups = ["main"] +files = [ + {file = "websockets-15.0.1-cp310-cp310-macosx_10_9_universal2.whl", hash = 
"sha256:d63efaa0cd96cf0c5fe4d581521d9fa87744540d4bc999ae6e08595a1014b45b"}, + {file = "websockets-15.0.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:ac60e3b188ec7574cb761b08d50fcedf9d77f1530352db4eef1707fe9dee7205"}, + {file = "websockets-15.0.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:5756779642579d902eed757b21b0164cd6fe338506a8083eb58af5c372e39d9a"}, + {file = "websockets-15.0.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0fdfe3e2a29e4db3659dbd5bbf04560cea53dd9610273917799f1cde46aa725e"}, + {file = "websockets-15.0.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4c2529b320eb9e35af0fa3016c187dffb84a3ecc572bcee7c3ce302bfeba52bf"}, + {file = "websockets-15.0.1-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ac1e5c9054fe23226fb11e05a6e630837f074174c4c2f0fe442996112a6de4fb"}, + {file = "websockets-15.0.1-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:5df592cd503496351d6dc14f7cdad49f268d8e618f80dce0cd5a36b93c3fc08d"}, + {file = "websockets-15.0.1-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:0a34631031a8f05657e8e90903e656959234f3a04552259458aac0b0f9ae6fd9"}, + {file = "websockets-15.0.1-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:3d00075aa65772e7ce9e990cab3ff1de702aa09be3940d1dc88d5abf1ab8a09c"}, + {file = "websockets-15.0.1-cp310-cp310-win32.whl", hash = "sha256:1234d4ef35db82f5446dca8e35a7da7964d02c127b095e172e54397fb6a6c256"}, + {file = "websockets-15.0.1-cp310-cp310-win_amd64.whl", hash = "sha256:39c1fec2c11dc8d89bba6b2bf1556af381611a173ac2b511cf7231622058af41"}, + {file = "websockets-15.0.1-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:823c248b690b2fd9303ba00c4f66cd5e2d8c3ba4aa968b2779be9532a4dad431"}, + {file = "websockets-15.0.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:678999709e68425ae2593acf2e3ebcbcf2e69885a5ee78f9eb80e6e371f1bf57"}, + 
{file = "websockets-15.0.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:d50fd1ee42388dcfb2b3676132c78116490976f1300da28eb629272d5d93e905"}, + {file = "websockets-15.0.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d99e5546bf73dbad5bf3547174cd6cb8ba7273062a23808ffea025ecb1cf8562"}, + {file = "websockets-15.0.1-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:66dd88c918e3287efc22409d426c8f729688d89a0c587c88971a0faa2c2f3792"}, + {file = "websockets-15.0.1-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8dd8327c795b3e3f219760fa603dcae1dcc148172290a8ab15158cf85a953413"}, + {file = "websockets-15.0.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:8fdc51055e6ff4adeb88d58a11042ec9a5eae317a0a53d12c062c8a8865909e8"}, + {file = "websockets-15.0.1-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:693f0192126df6c2327cce3baa7c06f2a117575e32ab2308f7f8216c29d9e2e3"}, + {file = "websockets-15.0.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:54479983bd5fb469c38f2f5c7e3a24f9a4e70594cd68cd1fa6b9340dadaff7cf"}, + {file = "websockets-15.0.1-cp311-cp311-win32.whl", hash = "sha256:16b6c1b3e57799b9d38427dda63edcbe4926352c47cf88588c0be4ace18dac85"}, + {file = "websockets-15.0.1-cp311-cp311-win_amd64.whl", hash = "sha256:27ccee0071a0e75d22cb35849b1db43f2ecd3e161041ac1ee9d2352ddf72f065"}, + {file = "websockets-15.0.1-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:3e90baa811a5d73f3ca0bcbf32064d663ed81318ab225ee4f427ad4e26e5aff3"}, + {file = "websockets-15.0.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:592f1a9fe869c778694f0aa806ba0374e97648ab57936f092fd9d87f8bc03665"}, + {file = "websockets-15.0.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:0701bc3cfcb9164d04a14b149fd74be7347a530ad3bbf15ab2c678a2cd3dd9a2"}, + {file = 
"websockets-15.0.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e8b56bdcdb4505c8078cb6c7157d9811a85790f2f2b3632c7d1462ab5783d215"}, + {file = "websockets-15.0.1-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0af68c55afbd5f07986df82831c7bff04846928ea8d1fd7f30052638788bc9b5"}, + {file = "websockets-15.0.1-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:64dee438fed052b52e4f98f76c5790513235efaa1ef7f3f2192c392cd7c91b65"}, + {file = "websockets-15.0.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:d5f6b181bb38171a8ad1d6aa58a67a6aa9d4b38d0f8c5f496b9e42561dfc62fe"}, + {file = "websockets-15.0.1-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:5d54b09eba2bada6011aea5375542a157637b91029687eb4fdb2dab11059c1b4"}, + {file = "websockets-15.0.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:3be571a8b5afed347da347bfcf27ba12b069d9d7f42cb8c7028b5e98bbb12597"}, + {file = "websockets-15.0.1-cp312-cp312-win32.whl", hash = "sha256:c338ffa0520bdb12fbc527265235639fb76e7bc7faafbb93f6ba80d9c06578a9"}, + {file = "websockets-15.0.1-cp312-cp312-win_amd64.whl", hash = "sha256:fcd5cf9e305d7b8338754470cf69cf81f420459dbae8a3b40cee57417f4614a7"}, + {file = "websockets-15.0.1-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:ee443ef070bb3b6ed74514f5efaa37a252af57c90eb33b956d35c8e9c10a1931"}, + {file = "websockets-15.0.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:5a939de6b7b4e18ca683218320fc67ea886038265fd1ed30173f5ce3f8e85675"}, + {file = "websockets-15.0.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:746ee8dba912cd6fc889a8147168991d50ed70447bf18bcda7039f7d2e3d9151"}, + {file = "websockets-15.0.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:595b6c3969023ecf9041b2936ac3827e4623bfa3ccf007575f04c5a6aa318c22"}, + {file = 
"websockets-15.0.1-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3c714d2fc58b5ca3e285461a4cc0c9a66bd0e24c5da9911e30158286c9b5be7f"}, + {file = "websockets-15.0.1-cp313-cp313-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0f3c1e2ab208db911594ae5b4f79addeb3501604a165019dd221c0bdcabe4db8"}, + {file = "websockets-15.0.1-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:229cf1d3ca6c1804400b0a9790dc66528e08a6a1feec0d5040e8b9eb14422375"}, + {file = "websockets-15.0.1-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:756c56e867a90fb00177d530dca4b097dd753cde348448a1012ed6c5131f8b7d"}, + {file = "websockets-15.0.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:558d023b3df0bffe50a04e710bc87742de35060580a293c2a984299ed83bc4e4"}, + {file = "websockets-15.0.1-cp313-cp313-win32.whl", hash = "sha256:ba9e56e8ceeeedb2e080147ba85ffcd5cd0711b89576b83784d8605a7df455fa"}, + {file = "websockets-15.0.1-cp313-cp313-win_amd64.whl", hash = "sha256:e09473f095a819042ecb2ab9465aee615bd9c2028e4ef7d933600a8401c79561"}, + {file = "websockets-15.0.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:5f4c04ead5aed67c8a1a20491d54cdfba5884507a48dd798ecaf13c74c4489f5"}, + {file = "websockets-15.0.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:abdc0c6c8c648b4805c5eacd131910d2a7f6455dfd3becab248ef108e89ab16a"}, + {file = "websockets-15.0.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:a625e06551975f4b7ea7102bc43895b90742746797e2e14b70ed61c43a90f09b"}, + {file = "websockets-15.0.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d591f8de75824cbb7acad4e05d2d710484f15f29d4a915092675ad3456f11770"}, + {file = "websockets-15.0.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:47819cea040f31d670cc8d324bb6435c6f133b8c7a19ec3d61634e62f8d8f9eb"}, + {file = 
"websockets-15.0.1-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ac017dd64572e5c3bd01939121e4d16cf30e5d7e110a119399cf3133b63ad054"}, + {file = "websockets-15.0.1-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:4a9fac8e469d04ce6c25bb2610dc535235bd4aa14996b4e6dbebf5e007eba5ee"}, + {file = "websockets-15.0.1-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:363c6f671b761efcb30608d24925a382497c12c506b51661883c3e22337265ed"}, + {file = "websockets-15.0.1-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:2034693ad3097d5355bfdacfffcbd3ef5694f9718ab7f29c29689a9eae841880"}, + {file = "websockets-15.0.1-cp39-cp39-win32.whl", hash = "sha256:3b1ac0d3e594bf121308112697cf4b32be538fb1444468fb0a6ae4feebc83411"}, + {file = "websockets-15.0.1-cp39-cp39-win_amd64.whl", hash = "sha256:b7643a03db5c95c799b89b31c036d5f27eeb4d259c798e878d6937d71832b1e4"}, + {file = "websockets-15.0.1-pp310-pypy310_pp73-macosx_10_15_x86_64.whl", hash = "sha256:0c9e74d766f2818bb95f84c25be4dea09841ac0f734d1966f415e4edfc4ef1c3"}, + {file = "websockets-15.0.1-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:1009ee0c7739c08a0cd59de430d6de452a55e42d6b522de7aa15e6f67db0b8e1"}, + {file = "websockets-15.0.1-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:76d1f20b1c7a2fa82367e04982e708723ba0e7b8d43aa643d3dcd404d74f1475"}, + {file = "websockets-15.0.1-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f29d80eb9a9263b8d109135351caf568cc3f80b9928bccde535c235de55c22d9"}, + {file = "websockets-15.0.1-pp310-pypy310_pp73-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b359ed09954d7c18bbc1680f380c7301f92c60bf924171629c5db97febb12f04"}, + {file = "websockets-15.0.1-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:cad21560da69f4ce7658ca2cb83138fb4cf695a2ba3e475e0559e05991aa8122"}, + {file = 
"websockets-15.0.1-pp39-pypy39_pp73-macosx_10_15_x86_64.whl", hash = "sha256:7f493881579c90fc262d9cdbaa05a6b54b3811c2f300766748db79f098db9940"}, + {file = "websockets-15.0.1-pp39-pypy39_pp73-macosx_11_0_arm64.whl", hash = "sha256:47b099e1f4fbc95b701b6e85768e1fcdaf1630f3cbe4765fa216596f12310e2e"}, + {file = "websockets-15.0.1-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:67f2b6de947f8c757db2db9c71527933ad0019737ec374a8a6be9a956786aaf9"}, + {file = "websockets-15.0.1-pp39-pypy39_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d08eb4c2b7d6c41da6ca0600c077e93f5adcfd979cd777d747e9ee624556da4b"}, + {file = "websockets-15.0.1-pp39-pypy39_pp73-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4b826973a4a2ae47ba357e4e82fa44a463b8f168e1ca775ac64521442b19e87f"}, + {file = "websockets-15.0.1-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:21c1fa28a6a7e3cbdc171c694398b6df4744613ce9b36b1a498e816787e28123"}, + {file = "websockets-15.0.1-py3-none-any.whl", hash = "sha256:f7a866fbc1e97b5c617ee4116daaa09b722101d4a3c170c787450ba409f9736f"}, + {file = "websockets-15.0.1.tar.gz", hash = "sha256:82544de02076bafba038ce055ee6412d68da13ab47f0c60cab827346de828dee"}, +] + [metadata] lock-version = "2.1" python-versions = ">=3.12,<3.14" -content-hash = "5ba2db3236ebe7fca6ed020fa130d1f5ebbccd6e350440d3d9d78646a1a6b038" +content-hash = "2f14e5d1b1ab5437ffabe5aa6d9f9e18e7fdf267c0df92ba24d302e264f006c6" diff --git a/postman/AI_Lab_API.postman_collection.json b/postman/AI_Lab_API.postman_collection.json new file mode 100644 index 0000000..e2774d6 --- /dev/null +++ b/postman/AI_Lab_API.postman_collection.json @@ -0,0 +1,613 @@ +{ + "info": { + "_postman_id": "ai-lab-api-collection", + "name": "AI Lab API - Complete Collection", + "description": "Complete Postman collection for AI Lab API with all endpoints for NLP pipelines using transformers", + "schema": 
"https://schema.getpostman.com/json/collection/v2.1.0/collection.json", + "_exporter_id": "ai-lab" + }, + "variable": [ + { + "key": "base_url", + "value": "http://localhost:8000", + "type": "string", + "description": "Base URL for AI Lab API" + } + ], + "item": [ + { + "name": "🏠 Core Endpoints", + "item": [ + { + "name": "Root - API Information", + "request": { + "method": "GET", + "header": [], + "url": { + "raw": "{{base_url}}/", + "host": ["{{base_url}}"], + "path": [""] + }, + "description": "Get API information and available endpoints" + }, + "response": [] + }, + { + "name": "Health Check", + "request": { + "method": "GET", + "header": [], + "url": { + "raw": "{{base_url}}/health", + "host": ["{{base_url}}"], + "path": ["health"] + }, + "description": "Check API health status and loaded pipelines" + }, + "response": [] + } + ], + "description": "Core API endpoints for health check and information" + }, + { + "name": "💭 Sentiment Analysis", + "item": [ + { + "name": "Analyze Sentiment - Positive", + "request": { + "method": "POST", + "header": [ + { + "key": "Content-Type", + "value": "application/json" + } + ], + "body": { + "mode": "raw", + "raw": "{\n \"text\": \"I absolutely love this project! 
It's amazing and well-designed.\"\n}" + }, + "url": { + "raw": "{{base_url}}/sentiment", + "host": ["{{base_url}}"], + "path": ["sentiment"] + }, + "description": "Analyze sentiment of positive text" + }, + "response": [] + }, + { + "name": "Analyze Sentiment - Negative", + "request": { + "method": "POST", + "header": [ + { + "key": "Content-Type", + "value": "application/json" + } + ], + "body": { + "mode": "raw", + "raw": "{\n \"text\": \"This is terrible and I hate it completely.\"\n}" + }, + "url": { + "raw": "{{base_url}}/sentiment", + "host": ["{{base_url}}"], + "path": ["sentiment"] + }, + "description": "Analyze sentiment of negative text" + }, + "response": [] + }, + { + "name": "Analyze Sentiment - Custom Model", + "request": { + "method": "POST", + "header": [ + { + "key": "Content-Type", + "value": "application/json" + } + ], + "body": { + "mode": "raw", + "raw": "{\n \"text\": \"This product is okay, nothing special.\",\n \"model_name\": \"cardiffnlp/twitter-roberta-base-sentiment-latest\"\n}" + }, + "url": { + "raw": "{{base_url}}/sentiment", + "host": ["{{base_url}}"], + "path": ["sentiment"] + }, + "description": "Analyze sentiment using custom model" + }, + "response": [] + }, + { + "name": "Batch Sentiment Analysis", + "request": { + "method": "POST", + "header": [ + { + "key": "Content-Type", + "value": "application/json" + } + ], + "body": { + "mode": "raw", + "raw": "{\n \"texts\": [\n \"I love this!\",\n \"This is terrible.\",\n \"It's okay, nothing special.\",\n \"Amazing product, highly recommended!\",\n \"Worst experience ever.\"\n ]\n}" + }, + "url": { + "raw": "{{base_url}}/sentiment/batch", + "host": ["{{base_url}}"], + "path": ["sentiment", "batch"] + }, + "description": "Analyze sentiment for multiple texts" + }, + "response": [] + } + ], + "description": "Sentiment analysis endpoints for analyzing emotional tone" + }, + { + "name": "🏷️ Named Entity Recognition", + "item": [ + { + "name": "Extract Entities - People & Organizations", + 
"request": { + "method": "POST", + "header": [ + { + "key": "Content-Type", + "value": "application/json" + } + ], + "body": { + "mode": "raw", + "raw": "{\n \"text\": \"Elon Musk is the CEO of Tesla and SpaceX. He was born in South Africa and now lives in California.\"\n}" + }, + "url": { + "raw": "{{base_url}}/ner", + "host": ["{{base_url}}"], + "path": ["ner"] + }, + "description": "Extract named entities from text" + }, + "response": [] + }, + { + "name": "Extract Entities - Geographic", + "request": { + "method": "POST", + "header": [ + { + "key": "Content-Type", + "value": "application/json" + } + ], + "body": { + "mode": "raw", + "raw": "{\n \"text\": \"The meeting will be held in Paris, France on Monday. We'll then travel to London, United Kingdom.\"\n}" + }, + "url": { + "raw": "{{base_url}}/ner", + "host": ["{{base_url}}"], + "path": ["ner"] + }, + "description": "Extract geographic entities" + }, + "response": [] + }, + { + "name": "Batch NER Processing", + "request": { + "method": "POST", + "header": [ + { + "key": "Content-Type", + "value": "application/json" + } + ], + "body": { + "mode": "raw", + "raw": "{\n \"texts\": [\n \"Apple Inc. is headquartered in Cupertino, California.\",\n \"Microsoft was founded by Bill Gates and Paul Allen.\",\n \"The conference will be in Tokyo, Japan next month.\"\n ]\n}" + }, + "url": { + "raw": "{{base_url}}/ner/batch", + "host": ["{{base_url}}"], + "path": ["ner", "batch"] + }, + "description": "Extract entities from multiple texts" + }, + "response": [] + } + ], + "description": "Named Entity Recognition endpoints for extracting people, places, organizations" + }, + { + "name": "❓ Question Answering", + "item": [ + { + "name": "Simple Q&A", + "request": { + "method": "POST", + "header": [ + { + "key": "Content-Type", + "value": "application/json" + } + ], + "body": { + "mode": "raw", + "raw": "{\n \"question\": \"What is the capital of France?\",\n \"context\": \"France is a country in Europe. 
Paris is the capital and largest city of France. The city is known for the Eiffel Tower and the Louvre Museum.\"\n}" + }, + "url": { + "raw": "{{base_url}}/qa", + "host": ["{{base_url}}"], + "path": ["qa"] + }, + "description": "Answer questions based on context" + }, + "response": [] + }, + { + "name": "Technical Q&A", + "request": { + "method": "POST", + "header": [ + { + "key": "Content-Type", + "value": "application/json" + } + ], + "body": { + "mode": "raw", + "raw": "{\n \"question\": \"What programming language is mentioned?\",\n \"context\": \"FastAPI is a modern, fast web framework for building APIs with Python 3.7+. It provides automatic interactive API documentation and is built on top of Starlette and Pydantic.\"\n}" + }, + "url": { + "raw": "{{base_url}}/qa", + "host": ["{{base_url}}"], + "path": ["qa"] + }, + "description": "Answer technical questions" + }, + "response": [] + } + ], + "description": "Question Answering endpoints for extracting answers from context" + }, + { + "name": "🎭 Fill Mask", + "item": [ + { + "name": "Fill Simple Mask", + "request": { + "method": "POST", + "header": [ + { + "key": "Content-Type", + "value": "application/json" + } + ], + "body": { + "mode": "raw", + "raw": "{\n \"text\": \"The capital of France is [MASK].\"\n}" + }, + "url": { + "raw": "{{base_url}}/fillmask", + "host": ["{{base_url}}"], + "path": ["fillmask"] + }, + "description": "Predict masked words in sentences" + }, + "response": [] + }, + { + "name": "Fill Technical Mask", + "request": { + "method": "POST", + "header": [ + { + "key": "Content-Type", + "value": "application/json" + } + ], + "body": { + "mode": "raw", + "raw": "{\n \"text\": \"Python is a popular [MASK] language for machine learning.\"\n}" + }, + "url": { + "raw": "{{base_url}}/fillmask", + "host": ["{{base_url}}"], + "path": ["fillmask"] + }, + "description": "Fill technical context masks" + }, + "response": [] + }, + { + "name": "Batch Fill Mask", + "request": { + "method": "POST", + 
"header": [ + { + "key": "Content-Type", + "value": "application/json" + } + ], + "body": { + "mode": "raw", + "raw": "{\n \"texts\": [\n \"The weather today is [MASK].\",\n \"I like to eat [MASK] for breakfast.\",\n \"The best programming language is [MASK].\"\n ]\n}" + }, + "url": { + "raw": "{{base_url}}/fillmask/batch", + "host": ["{{base_url}}"], + "path": ["fillmask", "batch"] + }, + "description": "Fill masks in multiple texts" + }, + "response": [] + } + ], + "description": "Fill Mask endpoints for predicting masked words" + }, + { + "name": "🛡️ Content Moderation", + "item": [ + { + "name": "Check Safe Content", + "request": { + "method": "POST", + "header": [ + { + "key": "Content-Type", + "value": "application/json" + } + ], + "body": { + "mode": "raw", + "raw": "{\n \"text\": \"This is a wonderful day and I'm feeling great!\"\n}" + }, + "url": { + "raw": "{{base_url}}/moderation", + "host": ["{{base_url}}"], + "path": ["moderation"] + }, + "description": "Check safe, non-toxic content" + }, + "response": [] + }, + { + "name": "Check Potentially Toxic Content", + "request": { + "method": "POST", + "header": [ + { + "key": "Content-Type", + "value": "application/json" + } + ], + "body": { + "mode": "raw", + "raw": "{\n \"text\": \"I hate everything and everyone around me!\"\n}" + }, + "url": { + "raw": "{{base_url}}/moderation", + "host": ["{{base_url}}"], + "path": ["moderation"] + }, + "description": "Check potentially toxic content" + }, + "response": [] + }, + { + "name": "Batch Content Moderation", + "request": { + "method": "POST", + "header": [ + { + "key": "Content-Type", + "value": "application/json" + } + ], + "body": { + "mode": "raw", + "raw": "{\n \"texts\": [\n \"Have a great day!\",\n \"I'm so angry right now!\",\n \"Thank you for your help.\",\n \"This is completely stupid!\"\n ]\n}" + }, + "url": { + "raw": "{{base_url}}/moderation/batch", + "host": ["{{base_url}}"], + "path": ["moderation", "batch"] + }, + "description": "Moderate 
multiple texts for toxicity" + }, + "response": [] + } + ], + "description": "Content Moderation endpoints for detecting toxic or harmful content" + }, + { + "name": "✍️ Text Generation", + "item": [ + { + "name": "Generate Creative Text", + "request": { + "method": "POST", + "header": [ + { + "key": "Content-Type", + "value": "application/json" + } + ], + "body": { + "mode": "raw", + "raw": "{\n \"text\": \"Once upon a time in a magical forest\"\n}" + }, + "url": { + "raw": "{{base_url}}/textgen", + "host": ["{{base_url}}"], + "path": ["textgen"] + }, + "description": "Generate creative text from prompt" + }, + "response": [] + }, + { + "name": "Generate Technical Text", + "request": { + "method": "POST", + "header": [ + { + "key": "Content-Type", + "value": "application/json" + } + ], + "body": { + "mode": "raw", + "raw": "{\n \"text\": \"FastAPI is a modern Python web framework that\"\n}" + }, + "url": { + "raw": "{{base_url}}/textgen", + "host": ["{{base_url}}"], + "path": ["textgen"] + }, + "description": "Generate technical documentation text" + }, + "response": [] + }, + { + "name": "Batch Text Generation", + "request": { + "method": "POST", + "header": [ + { + "key": "Content-Type", + "value": "application/json" + } + ], + "body": { + "mode": "raw", + "raw": "{\n \"texts\": [\n \"In the future, AI will\",\n \"The best way to learn programming is\",\n \"Climate change is\"\n ]\n}" + }, + "url": { + "raw": "{{base_url}}/textgen/batch", + "host": ["{{base_url}}"], + "path": ["textgen", "batch"] + }, + "description": "Generate text from multiple prompts" + }, + "response": [] + } + ], + "description": "Text Generation endpoints for creating text from prompts" + }, + { + "name": "🧪 Testing & Examples", + "item": [ + { + "name": "Complete Pipeline Test", + "request": { + "method": "POST", + "header": [ + { + "key": "Content-Type", + "value": "application/json" + } + ], + "body": { + "mode": "raw", + "raw": "{\n \"text\": \"AI Lab is an amazing project for 
learning NLP!\"\n}" + }, + "url": { + "raw": "{{base_url}}/sentiment", + "host": ["{{base_url}}"], + "path": ["sentiment"] + }, + "description": "Test with project-related text" + }, + "response": [] + }, + { + "name": "Error Handling Test - Empty Text", + "request": { + "method": "POST", + "header": [ + { + "key": "Content-Type", + "value": "application/json" + } + ], + "body": { + "mode": "raw", + "raw": "{\n \"text\": \"\"\n}" + }, + "url": { + "raw": "{{base_url}}/sentiment", + "host": ["{{base_url}}"], + "path": ["sentiment"] + }, + "description": "Test error handling with empty text" + }, + "response": [] + }, + { + "name": "Error Handling Test - Invalid Model", + "request": { + "method": "POST", + "header": [ + { + "key": "Content-Type", + "value": "application/json" + } + ], + "body": { + "mode": "raw", + "raw": "{\n \"text\": \"Test text\",\n \"model_name\": \"non-existent-model\"\n}" + }, + "url": { + "raw": "{{base_url}}/sentiment", + "host": ["{{base_url}}"], + "path": ["sentiment"] + }, + "description": "Test error handling with invalid model" + }, + "response": [] + } + ], + "description": "Testing endpoints and error handling examples" + } + ], + "event": [ + { + "listen": "prerequest", + "script": { + "type": "text/javascript", + "exec": [ + "// Pre-request script for all requests", + "console.log('Making request to: ' + pm.request.url);", + "", + "// Add timestamp to request", + "pm.globals.set('request_timestamp', new Date().toISOString());" + ] + } + }, + { + "listen": "test", + "script": { + "type": "text/javascript", + "exec": [ + "// Common tests for all requests", + "pm.test('Response time is less than 30 seconds', function () {", + " pm.expect(pm.response.responseTime).to.be.below(30000);", + "});", + "", + "pm.test('Response has Content-Type header', function () {", + " pm.expect(pm.response.headers.get('Content-Type')).to.include('application/json');", + "});", + "", + "// Log response for debugging", + "console.log('Response status:', 
pm.response.status);", + "console.log('Response time:', pm.response.responseTime + 'ms');" + ] + } + } + ] +} diff --git a/postman/AI_Lab_API.postman_environment.json b/postman/AI_Lab_API.postman_environment.json new file mode 100644 index 0000000..b270a0e --- /dev/null +++ b/postman/AI_Lab_API.postman_environment.json @@ -0,0 +1,63 @@ +{ + "id": "ai-lab-api-environment", + "name": "AI Lab API Environment", + "values": [ + { + "key": "base_url", + "value": "http://localhost:8000", + "type": "default", + "description": "Base URL for AI Lab API (default: localhost)" + }, + { + "key": "api_version", + "value": "1.0.0", + "type": "default", + "description": "API version" + }, + { + "key": "timeout", + "value": "30000", + "type": "default", + "description": "Request timeout in milliseconds" + }, + { + "key": "default_sentiment_model", + "value": "distilbert-base-uncased-finetuned-sst-2-english", + "type": "default", + "description": "Default sentiment analysis model" + }, + { + "key": "default_ner_model", + "value": "dslim/bert-base-NER", + "type": "default", + "description": "Default NER model" + }, + { + "key": "default_qa_model", + "value": "distilbert-base-cased-distilled-squad", + "type": "default", + "description": "Default Q&A model" + }, + { + "key": "default_fillmask_model", + "value": "bert-base-uncased", + "type": "default", + "description": "Default fill mask model" + }, + { + "key": "default_textgen_model", + "value": "gpt2", + "type": "default", + "description": "Default text generation model" + }, + { + "key": "default_moderation_model", + "value": "unitary/toxic-bert", + "type": "default", + "description": "Default content moderation model" + } + ], + "_postman_variable_scope": "environment", + "_postman_exported_at": "2024-10-12T10:00:00.000Z", + "_postman_exported_using": "Postman/10.18.0" +} diff --git a/pyproject.toml b/pyproject.toml index 20222d1..ab2496d 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -18,6 +18,9 @@ transformers = "^4.30.0" 
tokenizers = "^0.13.0" numpy = "^1.24.0" accelerate = "^0.20.0" +fastapi = "^0.104.0" +uvicorn = { extras = ["standard"], version = "^0.24.0" } +pydantic = "^2.5.0" [tool.poetry.scripts] ai-lab = "src.main:main" diff --git a/src/api/__init__.py b/src/api/__init__.py new file mode 100644 index 0000000..e3c391c --- /dev/null +++ b/src/api/__init__.py @@ -0,0 +1,7 @@ +""" +API module for AI Lab +""" +from .models import * +from .app import app + +__all__ = ["app"] \ No newline at end of file diff --git a/src/api/app.py b/src/api/app.py new file mode 100644 index 0000000..dded0e0 --- /dev/null +++ b/src/api/app.py @@ -0,0 +1,494 @@ +""" +FastAPI application for AI Lab +""" +from fastapi import FastAPI, HTTPException +from fastapi.middleware.cors import CORSMiddleware +from contextlib import asynccontextmanager +from typing import Dict, Any +import logging + +from .models import ( + TextRequest, TextListRequest, QARequest, FillMaskRequest, + SentimentResponse, NERResponse, QAResponse, FillMaskResponse, + ModerationResponse, TextGenResponse, BatchResponse +) + +# Global pipeline instances +pipelines: Dict[str, Any] = {} + +@asynccontextmanager +async def lifespan(app: FastAPI): + """Manage application lifespan - load models on startup""" + global pipelines + + # Load all pipelines on startup + try: + logging.info("Loading AI pipelines...") + + # Import here to avoid circular imports + from src.pipelines.sentiment import SentimentAnalyzer + from src.pipelines.ner import NamedEntityRecognizer + from src.pipelines.qa import QuestionAnsweringSystem + from src.pipelines.fillmask import FillMaskAnalyzer + from src.pipelines.moderation import ContentModerator + from src.pipelines.textgen import TextGenerator + + pipelines["sentiment"] = SentimentAnalyzer() + pipelines["ner"] = NamedEntityRecognizer() + pipelines["qa"] = QuestionAnsweringSystem() + pipelines["fillmask"] = FillMaskAnalyzer() + pipelines["moderation"] = ContentModerator() + pipelines["textgen"] = TextGenerator() + 
logging.info("All pipelines loaded successfully!") + except Exception as e: + logging.error(f"Error loading pipelines: {e}") + # Don't raise, just log - allows API to start without all pipelines + + yield + + # Cleanup on shutdown + pipelines.clear() + logging.info("Pipelines cleaned up") + + +# Create FastAPI app +app = FastAPI( + title="AI Lab API", + description="API for various AI/ML pipelines using transformers", + version="1.0.0", + lifespan=lifespan, + swagger_ui_parameters={ + "syntaxHighlight.theme": "obsidian", + "tryItOutEnabled": True, + "requestSnippetsEnabled": True, + "persistAuthorization": True, + "displayRequestDuration": True, + "defaultModelRendering": "model" + } +) + +# Add CORS middleware +app.add_middleware( + CORSMiddleware, + allow_origins=["*"], # Configure appropriately for production + allow_credentials=True, + allow_methods=["*"], + allow_headers=["*"], +) + + +@app.get("/") +async def root(): + """Root endpoint""" + return { + "message": "Welcome to AI Lab API", + "version": "1.0.0", + "available_endpoints": [ + "/sentiment", + "/ner", + "/qa", + "/fillmask", + "/moderation", + "/textgen", + "/sentiment/batch", + "/ner/batch", + "/fillmask/batch", + "/moderation/batch", + "/textgen/batch", + "/health", + "/docs" + ] + } + + +@app.get("/health") +async def health_check(): + """Health check endpoint""" + return { + "status": "healthy", + "pipelines_loaded": len(pipelines), + "available_pipelines": list(pipelines.keys()) + } + + +@app.post("/sentiment", response_model=SentimentResponse) +async def analyze_sentiment(request: TextRequest): + """Analyze sentiment of a text""" + try: + if "sentiment" not in pipelines: + raise HTTPException(status_code=503, detail="Sentiment pipeline not available") + + # Use custom model if provided + if request.model_name: + from src.pipelines.sentiment import SentimentAnalyzer + analyzer = SentimentAnalyzer(request.model_name) + result = analyzer.analyze(request.text) + else: + result = 
pipelines["sentiment"].analyze(request.text) + + if "error" in result: + return SentimentResponse(success=False, text=request.text, message=result["error"]) + + return SentimentResponse( + success=True, + text=result["text"], + sentiment=result["sentiment"], + confidence=result["confidence"] + ) + except HTTPException: + # Re-raise as-is so intended status codes (e.g. 503) are not rewrapped as 500 + raise + except Exception as e: + raise HTTPException(status_code=500, detail=str(e)) + + +@app.post("/ner", response_model=NERResponse) +async def extract_entities(request: TextRequest): + """Extract named entities from text""" + try: + logging.info(f"NER request for text: {request.text[:50]}...") + if "ner" not in pipelines: + raise HTTPException(status_code=503, detail="NER pipeline not available") + + try: + if request.model_name: + from src.pipelines.ner import NamedEntityRecognizer + ner = NamedEntityRecognizer(request.model_name) + result = ner.recognize(request.text) + else: + result = pipelines["ner"].recognize(request.text) + except Exception as pipeline_error: + logging.error(f"Pipeline error: {str(pipeline_error)}") + return NERResponse(success=False, text=request.text, message=f"Pipeline error: {str(pipeline_error)}") + + logging.info(f"NER result keys: {list(result.keys())}") + + if "error" in result: + logging.error(f"NER error: {result['error']}") + return NERResponse(success=False, text=request.text, message=result["error"]) + + # Validate result structure + if "original_text" not in result: + logging.error(f"Missing 'original_text' in result: {result}") + return NERResponse(success=False, text=request.text, message="Invalid NER result format") + + if "entities" not in result: + logging.error(f"Missing 'entities' in result: {result}") + return NERResponse(success=False, text=request.text, message="Invalid NER result format") + + return NERResponse( + success=True, + text=result["original_text"], + entities=result["entities"] + ) + except HTTPException: + raise + except Exception as e: + raise HTTPException(status_code=500, detail=str(e)) + + +@app.post("/qa", response_model=QAResponse) +async
def answer_question(request: QARequest): + """Answer a question based on context""" + try: + if "qa" not in pipelines: + raise HTTPException(status_code=503, detail="QA pipeline not available") + + if request.model_name: + from src.pipelines.qa import QuestionAnsweringSystem + qa = QuestionAnsweringSystem(request.model_name) + result = qa.answer(request.question, request.context) + else: + result = pipelines["qa"].answer(request.question, request.context) + + if "error" in result: + return QAResponse( + success=False, + question=request.question, + context=request.context, + message=result["error"] + ) + + return QAResponse( + success=True, + question=result["question"], + context=result["context"], + answer=result["answer"], + confidence=result["confidence"] + ) + except HTTPException: + raise + except Exception as e: + raise HTTPException(status_code=500, detail=str(e)) + + +@app.post("/fillmask", response_model=FillMaskResponse) +async def fill_mask(request: FillMaskRequest): + """Fill masked words in text""" + try: + if "fillmask" not in pipelines: + raise HTTPException(status_code=503, detail="Fill-mask pipeline not available") + + if request.model_name: + from src.pipelines.fillmask import FillMaskAnalyzer + fillmask = FillMaskAnalyzer(request.model_name) + result = fillmask.predict(request.text) + else: + result = pipelines["fillmask"].predict(request.text) + + if "error" in result: + return FillMaskResponse(success=False, text=request.text, message=result["error"]) + + return FillMaskResponse( + success=True, + text=result["original_text"], + predictions=result["predictions"] + ) + except HTTPException: + raise + except Exception as e: + raise HTTPException(status_code=500, detail=str(e)) + + +@app.post("/moderation", response_model=ModerationResponse) +async def moderate_content(request: TextRequest): + """Moderate content for inappropriate material""" + try: + if "moderation" not in pipelines: + raise HTTPException(status_code=503, detail="Moderation pipeline not available") + + if request.model_name: + from
src.pipelines.moderation import ContentModerator + moderation = ContentModerator(request.model_name) + result = moderation.moderate(request.text) + else: + result = pipelines["moderation"].moderate(request.text) + + if "error" in result: + return ModerationResponse(success=False, text=request.text, message=result["error"]) + + # Map the result fields correctly + flagged = result.get("is_modified", False) or result.get("toxic_score", 0.0) > 0.5 + categories = { + "toxic_score": result.get("toxic_score", 0.0), + "is_modified": result.get("is_modified", False), + "restored_text": result.get("moderated_text", request.text), + "words_replaced": result.get("words_replaced", 0) + } + + return ModerationResponse( + success=True, + text=result["original_text"], + flagged=flagged, + categories=categories + ) + except HTTPException: + raise + except Exception as e: + raise HTTPException(status_code=500, detail=str(e)) + + +@app.post("/textgen", response_model=TextGenResponse) +async def generate_text(request: TextRequest): + """Generate text from a prompt""" + try: + if "textgen" not in pipelines: + raise HTTPException(status_code=503, detail="Text generation pipeline not available") + + logging.info(f"Generating text for prompt: {request.text[:50]}...") + + if request.model_name: + from src.pipelines.textgen import TextGenerator + textgen = TextGenerator(request.model_name) + result = textgen.generate(request.text) + else: + result = pipelines["textgen"].generate(request.text) + + logging.info(f"Generation result keys: {list(result.keys())}") + + if "error" in result: + logging.error(f"Generation error: {result['error']}") + return TextGenResponse(success=False, prompt=request.text, message=result["error"]) + + # Extract the generated text from the first generation + generated_text = "" + if "generations" in result and len(result["generations"]) > 0: + # Get the continuation (text after the prompt) from the first generation + generated_text = result["generations"][0].get("continuation", "") +
logging.info(f"Extracted generated text: {generated_text[:100]}...") + else: + logging.warning("No generations found in result") + + return TextGenResponse( + success=True, + prompt=result["prompt"], + generated_text=result["prompt"] + " " + generated_text + ) + except HTTPException: + raise + except Exception as e: + logging.error(f"TextGen endpoint error: {str(e)}", exc_info=True) + raise HTTPException(status_code=500, detail=str(e)) + + +# Batch processing endpoints +@app.post("/sentiment/batch", response_model=BatchResponse) +async def analyze_sentiment_batch(request: TextListRequest): + """Analyze sentiment for multiple texts""" + try: + if "sentiment" not in pipelines: + raise HTTPException(status_code=503, detail="Sentiment pipeline not available") + + analyzer = pipelines["sentiment"] + if request.model_name: + from src.pipelines.sentiment import SentimentAnalyzer + analyzer = SentimentAnalyzer(request.model_name) + + results = [] + failed_count = 0 + + for text in request.texts: + try: + result = analyzer.analyze(text) + if "error" in result: + failed_count += 1 + results.append(result) + except Exception as e: + failed_count += 1 + results.append({"text": text, "error": str(e)}) + + return BatchResponse( + success=True, + results=results, + processed_count=len(request.texts), + failed_count=failed_count + ) + except HTTPException: + raise + except Exception as e: + raise HTTPException(status_code=500, detail=str(e)) + + +@app.post("/ner/batch", response_model=BatchResponse) +async def extract_entities_batch(request: TextListRequest): + """Extract entities from multiple texts""" + try: + if "ner" not in pipelines: + raise HTTPException(status_code=503, detail="NER pipeline not available") + + ner = pipelines["ner"] + if request.model_name: + from src.pipelines.ner import NamedEntityRecognizer + ner = NamedEntityRecognizer(request.model_name) + + results = [] + failed_count = 0 + + for text in request.texts: + try: + result = ner.recognize(text) + if "error" in result: + failed_count += 1 + results.append(result) +
except Exception as e: + failed_count += 1 + results.append({"text": text, "error": str(e)}) + + return BatchResponse( + success=True, + results=results, + processed_count=len(request.texts), + failed_count=failed_count + ) + except HTTPException: + raise + except Exception as e: + raise HTTPException(status_code=500, detail=str(e)) + + +@app.post("/fillmask/batch", response_model=BatchResponse) +async def fill_mask_batch(request: TextListRequest): + """Fill masked words in multiple texts""" + try: + if "fillmask" not in pipelines: + raise HTTPException(status_code=503, detail="Fill-mask pipeline not available") + + fillmask = pipelines["fillmask"] + if request.model_name: + from src.pipelines.fillmask import FillMaskAnalyzer + fillmask = FillMaskAnalyzer(request.model_name) + + results = [] + failed_count = 0 + + for text in request.texts: + try: + result = fillmask.predict(text) + if "error" in result: + failed_count += 1 + results.append(result) + except Exception as e: + failed_count += 1 + results.append({"text": text, "error": str(e)}) + + return BatchResponse( + success=True, + results=results, + processed_count=len(request.texts), + failed_count=failed_count + ) + except HTTPException: + raise + except Exception as e: + raise HTTPException(status_code=500, detail=str(e)) + + +@app.post("/moderation/batch", response_model=BatchResponse) +async def moderate_content_batch(request: TextListRequest): + """Moderate multiple texts for inappropriate content""" + try: + if "moderation" not in pipelines: + raise HTTPException(status_code=503, detail="Moderation pipeline not available") + + moderation = pipelines["moderation"] + if request.model_name: + from src.pipelines.moderation import ContentModerator + moderation = ContentModerator(request.model_name) + + results = [] + failed_count = 0 + + for text in request.texts: + try: + result = moderation.moderate(text) + if "error" in result: + failed_count += 1 + results.append(result) + except Exception as e: + failed_count += 1 + results.append({"text": text, "error": str(e)}) + +
return BatchResponse( + success=True, + results=results, + processed_count=len(request.texts), + failed_count=failed_count + ) + except HTTPException: + raise + except Exception as e: + raise HTTPException(status_code=500, detail=str(e)) + + +@app.post("/textgen/batch", response_model=BatchResponse) +async def generate_text_batch(request: TextListRequest): + """Generate text from multiple prompts""" + try: + if "textgen" not in pipelines: + raise HTTPException(status_code=503, detail="Text generation pipeline not available") + + textgen = pipelines["textgen"] + if request.model_name: + from src.pipelines.textgen import TextGenerator + textgen = TextGenerator(request.model_name) + + results = [] + failed_count = 0 + + for text in request.texts: + try: + result = textgen.generate(text) + if "error" in result: + failed_count += 1 + results.append(result) + except Exception as e: + failed_count += 1 + results.append({"text": text, "error": str(e)}) + + return BatchResponse( + success=True, + results=results, + processed_count=len(request.texts), + failed_count=failed_count + ) + except HTTPException: + raise + except Exception as e: + raise HTTPException(status_code=500, detail=str(e)) \ No newline at end of file diff --git a/src/api/config.py b/src/api/config.py new file mode 100644 index 0000000..f5372fc --- /dev/null +++ b/src/api/config.py @@ -0,0 +1,45 @@ +""" +API configuration settings +""" +from typing import Dict, Any + + +class APIConfig: + """Configuration for the FastAPI application""" + + # Server settings + DEFAULT_HOST = "127.0.0.1" + DEFAULT_PORT = 8000 + + # API settings + API_TITLE = "AI Lab API" + API_DESCRIPTION = "API for various AI/ML pipelines using transformers" + API_VERSION = "1.0.0" + + # CORS settings + CORS_ORIGINS = ["*"] # Configure for production + CORS_METHODS = ["*"] + CORS_HEADERS = ["*"] + + # Pipeline settings + MAX_TEXT_LENGTH = 10000 + MAX_BATCH_SIZE = 100 + + @classmethod + def get_all_settings(cls) -> Dict[str, Any]: + """Get all configuration
settings""" + return { + "server": { + "default_host": cls.DEFAULT_HOST, + "default_port": cls.DEFAULT_PORT + }, + "api": { + "title": cls.API_TITLE, + "description": cls.API_DESCRIPTION, + "version": cls.API_VERSION + }, + "limits": { + "max_text_length": cls.MAX_TEXT_LENGTH, + "max_batch_size": cls.MAX_BATCH_SIZE + }, + } \ No newline at end of file diff --git a/src/api/models.py b/src/api/models.py new file mode 100644 index 0000000..eae6336 --- /dev/null +++ b/src/api/models.py @@ -0,0 +1,91 @@ +""" +Pydantic models for API requests and responses +""" +from pydantic import BaseModel, ConfigDict +from typing import List, Optional, Dict, Any + + +# Request models +class TextRequest(BaseModel): + """Base request model for single text input""" + # Pydantic v2 reserves the `model_` prefix; opt out so `model_name` does not warn + model_config = ConfigDict(protected_namespaces=()) + + text: str + model_name: Optional[str] = None + + +class TextListRequest(BaseModel): + """Request model for multiple texts""" + model_config = ConfigDict(protected_namespaces=()) + + texts: List[str] + model_name: Optional[str] = None + + +class QARequest(BaseModel): + """Request model for question answering""" + model_config = ConfigDict(protected_namespaces=()) + + question: str + context: str + model_name: Optional[str] = None + + +class FillMaskRequest(BaseModel): + """Request model for fill mask task""" + model_config = ConfigDict(protected_namespaces=()) + + text: str + model_name: Optional[str] = None + + +# Response models +class BaseResponse(BaseModel): + """Base response model""" + success: bool + message: Optional[str] = None + + +class SentimentResponse(BaseResponse): + """Response model for sentiment analysis""" + text: str + sentiment: Optional[str] = None + confidence: Optional[float] = None + + +class NERResponse(BaseResponse): + """Response model for Named Entity Recognition""" + text: str + entities: Optional[List[Dict[str, Any]]] = None + + +class QAResponse(BaseResponse): + """Response model for Question Answering""" + question: str + context: str + answer: Optional[str] = None + confidence: Optional[float] = None + + +class FillMaskResponse(BaseResponse): + """Response model for Fill Mask""" + text: str + predictions: Optional[List[Dict[str, Any]]] = None + + +class
ModerationResponse(BaseResponse): + """Response model for Content Moderation""" + text: str + flagged: Optional[bool] = None + categories: Optional[Dict[str, Any]] = None + + +class TextGenResponse(BaseResponse): + """Response model for Text Generation""" + prompt: str + generated_text: Optional[str] = None + + +class BatchResponse(BaseResponse): + """Response model for batch processing""" + results: List[Dict[str, Any]] + processed_count: int + failed_count: int + + +class ErrorResponse(BaseResponse): + """Response model for errors""" + error: str + details: Optional[str] = None \ No newline at end of file diff --git a/src/cli/display.py b/src/cli/display.py index df2dc27..6379541 100644 --- a/src/cli/display.py +++ b/src/cli/display.py @@ -190,3 +190,78 @@ class DisplayFormatter: output.append(f" • {entity} ({count}x)") return "\n".join(output) + + @staticmethod + def format_qa_result(result: Dict[str, Any]) -> str: + """Format Question Answering result for display""" + if "error" in result: + return f"❌ {result['error']}" + + output = [] + output.append(f"❓ Question: {result['question']}") + + # Confidence indicator + confidence = result['confidence'] + confidence_emoji = "✅" if result['is_confident'] else "⚠️" + confidence_bar = "█" * int(confidence * 10) + + output.append(f"{confidence_emoji} Answer: {result['answer']}") + output.append(f"📊 Confidence: {result['confidence_level']} ({confidence:.1%}) {confidence_bar}") + + if not result['is_confident']: + output.append("⚠️ Low confidence - answer might not be reliable") + + output.append(f"\n📍 Position: characters {result['start_position']}-{result['end_position']}") + output.append(f"📄 Context with answer highlighted:") + output.append(f" {result['highlighted_context']}") + + return "\n".join(output) + + @staticmethod + def format_qa_context_analysis(analysis: Dict[str, Any]) -> str: + """Format QA context analysis for display""" + if "error" in analysis: + return f"❌ {analysis['error']}" + + output = [] + 
output.append("✅ Context set successfully!") + output.append(f"📊 Context Statistics:") + + stats = analysis['context_stats'] + output.append(f" • Words: {stats['word_count']}") + output.append(f" • Sentences: ~{stats['sentence_count']}") + output.append(f" • Characters: {stats['character_count']}") + + if analysis['suggested_questions']: + output.append(f"\n💡 Suggested question types:") + for suggestion in analysis['suggested_questions']: + output.append(f" • {suggestion}") + + if analysis['tips']: + output.append(f"\n📝 Tips for good questions:") + for tip in analysis['tips']: + output.append(f" • {tip}") + + return "\n".join(output) + + @staticmethod + def format_qa_multiple_result(result: Dict[str, Any]) -> str: + """Format multiple QA results for display""" + if "error" in result: + return f"❌ {result['error']}" + + output = [] + output.append(f"📊 Multiple Questions Analysis") + output.append("=" * 50) + output.append(f"Total Questions: {result['total_questions']}") + output.append(f"Successfully Processed: {result['processed_questions']}") + output.append(f"Confident Answers: {result['confident_answers']}") + output.append(f"Average Confidence: {result['average_confidence']:.1%}") + + output.append(f"\n📋 Results:") + for qa_result in result['results']: + confidence_emoji = "✅" if qa_result['is_confident'] else "⚠️" + output.append(f"\n{qa_result['question_number']}. 
{qa_result['question']}") + output.append(f" {confidence_emoji} {qa_result['answer']} ({qa_result['confidence']:.1%})") + + return "\n".join(output) diff --git a/src/commands/__init__.py b/src/commands/__init__.py index e8ea5d3..344e3d6 100644 --- a/src/commands/__init__.py +++ b/src/commands/__init__.py @@ -6,5 +6,6 @@ from .fillmask import FillMaskCommand from .textgen import TextGenCommand from .moderation import ModerationCommand from .ner import NERCommand +from .qa import QACommand -__all__ = ['SentimentCommand', 'FillMaskCommand', 'TextGenCommand', 'ModerationCommand', 'NERCommand'] +__all__ = ['SentimentCommand', 'FillMaskCommand', 'TextGenCommand', 'ModerationCommand', 'NERCommand', 'QACommand'] diff --git a/src/commands/qa.py b/src/commands/qa.py new file mode 100644 index 0000000..28f4543 --- /dev/null +++ b/src/commands/qa.py @@ -0,0 +1,214 @@ +from src.cli.base import CLICommand +from src.cli.display import DisplayFormatter +from src.pipelines.qa import QuestionAnsweringSystem + + +class QACommand(CLICommand): + """Interactive Question Answering command""" + + def __init__(self): + self.qa_system = None + self.current_context = None + self.session_questions = [] + + @property + def name(self) -> str: + return "qa" + + @property + def description(self) -> str: + return "Question Answering - Ask questions about a given text" + + def _initialize_qa_system(self): + """Lazy initialization of the QA system""" + if self.qa_system is None: + print("🔄 Loading Question Answering model...") + self.qa_system = QuestionAnsweringSystem() + DisplayFormatter.show_success("QA model loaded!") + + def _show_instructions(self): + """Show usage instructions and examples""" + print("\n❓ Question Answering System") + print("Ask questions about a text context and get precise answers.") + print("\n📝 How it works:") + print(" 1. First, provide a context (text containing information)") + print(" 2. Then ask questions about that context") + print(" 3. 
The system extracts answers directly from the text") + print("\n💡 Example context:") + print(" 'Albert Einstein was born in 1879 in Germany. He developed the theory of relativity.'") + print("💡 Example questions:") + print(" - When was Einstein born?") + print(" - Where was Einstein born?") + print(" - What theory did Einstein develop?") + print("\n🎛️ Commands:") + print(" 'back' - Return to main menu") + print(" 'help' - Show these instructions") + print(" 'context' - Set new context") + print(" 'multi' - Ask multiple questions at once") + print(" 'session' - Review session history") + print(" 'settings' - Adjust confidence threshold") + print("-" * 70) + + def _set_context(self): + """Allow user to set or change the context""" + print("\n📄 Set Context") + print("Enter the text that will serve as context for your questions.") + print("You can enter multiple lines. Type 'done' when finished.") + print("-" * 50) + + lines = [] + while True: + line = input("📝 ").strip() + if line.lower() == 'done': + break + if line: + lines.append(line) + + if not lines: + DisplayFormatter.show_warning("No context provided") + return False + + self.current_context = " ".join(lines) + + # Analyze context + analysis = self.qa_system.interactive_qa(self.current_context) + if "error" in analysis: + DisplayFormatter.show_error(analysis["error"]) + return False + + formatted_analysis = DisplayFormatter.format_qa_context_analysis(analysis) + print(formatted_analysis) + + return True + + def _ask_single_question(self): + """Ask a single question about the current context""" + if not self.current_context: + DisplayFormatter.show_warning("Please set a context first using 'context' command") + return + + question = input("\n❓ Your question: ").strip() + + if not question: + DisplayFormatter.show_warning("Please enter a question") + return + + DisplayFormatter.show_loading("Finding answer...") + result = self.qa_system.answer(question, self.current_context) + + if "error" not in result: + 
self.session_questions.append(result) + + formatted_result = DisplayFormatter.format_qa_result(result) + print(formatted_result) + + def _multi_question_mode(self): + """Allow asking multiple questions at once""" + if not self.current_context: + DisplayFormatter.show_warning("Please set a context first using 'context' command") + return + + print("\n❓ Multiple Questions Mode") + print("Enter your questions one by one. Type 'done' when finished.") + print("-" * 50) + + questions = [] + while True: + question = input(f"Question #{len(questions)+1}: ").strip() + if question.lower() == 'done': + break + if question: + questions.append(question) + + if not questions: + DisplayFormatter.show_warning("No questions provided") + return + + DisplayFormatter.show_loading(f"Processing {len(questions)} questions...") + result = self.qa_system.answer_multiple(questions, self.current_context) + + if "error" not in result: + self.session_questions.extend(result["results"]) + + formatted_result = DisplayFormatter.format_qa_multiple_result(result) + print(formatted_result) + + def _show_session_history(self): + """Show the history of questions asked in this session""" + if not self.session_questions: + DisplayFormatter.show_warning("No questions asked in this session yet") + return + + print(f"\n📚 Session History ({len(self.session_questions)} questions)") + print("=" * 60) + + for i, qa in enumerate(self.session_questions, 1): + confidence_emoji = "✅" if qa["is_confident"] else "⚠️" + print(f"\n{i}. 
{qa['question']}") + print(f" {confidence_emoji} {qa['answer']} (confidence: {qa['confidence']:.1%})") + + def _adjust_settings(self): + """Allow user to adjust QA settings""" + current_threshold = self.qa_system.confidence_threshold + print(f"\n⚙️ Current Settings:") + print(f"Confidence threshold: {current_threshold:.2f}") + print("\nLower threshold = more answers accepted (less strict)") + print("Higher threshold = fewer answers accepted (more strict)") + + try: + new_threshold = input(f"Enter new threshold (0.0-1.0, current: {current_threshold}): ").strip() + if new_threshold: + threshold = float(new_threshold) + self.qa_system.set_confidence_threshold(threshold) + DisplayFormatter.show_success(f"Threshold set to {threshold:.2f}") + except ValueError: + DisplayFormatter.show_error("Invalid threshold value") + + def run(self): + """Run interactive Question Answering""" + self._initialize_qa_system() + self._show_instructions() + + while True: + if self.current_context: + context_preview = (self.current_context[:50] + "...") if len(self.current_context) > 50 else self.current_context + prompt = f"\n💬 [{context_preview}] Ask a question: " + else: + prompt = "\n💬 Enter command or set context first: " + + user_input = input(prompt).strip() + + if user_input.lower() == 'back': + break + elif user_input.lower() == 'help': + self._show_instructions() + continue + elif user_input.lower() == 'context': + self._set_context() + continue + elif user_input.lower() == 'multi': + self._multi_question_mode() + continue + elif user_input.lower() == 'session': + self._show_session_history() + continue + elif user_input.lower() == 'settings': + self._adjust_settings() + continue + + if not user_input: + DisplayFormatter.show_warning("Please enter a question or command") + continue + + # If we have a context and user input is not a command, treat it as a question + if self.current_context: + DisplayFormatter.show_loading("Finding answer...") + result = 
self.qa_system.answer(user_input, self.current_context) + + if "error" not in result: + self.session_questions.append(result) + + formatted_result = DisplayFormatter.format_qa_result(result) + print(formatted_result) + else: + DisplayFormatter.show_warning("Please set a context first using 'context' command") diff --git a/src/config/settings.py b/src/config/settings.py index 8d9b2e8..95ccbd1 100644 --- a/src/config/settings.py +++ b/src/config/settings.py @@ -3,6 +3,7 @@ Global project configuration """ from pathlib import Path from typing import Dict, Any +import torch class Config: @@ -14,11 +15,12 @@ class Config: # Default models DEFAULT_MODELS = { - "sentiment": "cardiffnlp/twitter-roberta-base-sentiment-latest", - "fillmask": "distilbert-base-uncased", - "textgen": "gpt2", - "moderation": "unitary/toxic-bert", - "ner": "dbmdz/bert-large-cased-finetuned-conll03-english", + "sentiment": "distilbert-base-uncased-finetuned-sst-2-english", + "fillmask": "bert-base-uncased", + "textgen": "gpt2", + "ner": "dslim/bert-base-NER", + "moderation":"unitary/toxic-bert", + "qa": "distilbert-base-cased-distilled-squad", } # Interface @@ -28,6 +30,7 @@ class Config: # Performance MAX_BATCH_SIZE = 32 DEFAULT_MAX_LENGTH = 512 + USE_GPU = torch.cuda.is_available() # Auto-detect GPU availability @classmethod def get_model(cls, pipeline_name: str) -> str: diff --git a/src/main.py b/src/main.py index aa16046..4067ca8 100644 --- a/src/main.py +++ b/src/main.py @@ -1,8 +1,9 @@ #!/usr/bin/env python3 """ -CLI entry point for AI Lab +Entry point for AI Lab - supports both CLI and API modes """ import sys +import argparse from pathlib import Path # Add parent directory to PYTHONPATH @@ -13,13 +14,14 @@ from src.commands import ( FillMaskCommand, ModerationCommand, NERCommand, + QACommand, SentimentCommand, TextGenCommand, ) -def main(): - """Main CLI function""" +def run_cli(): + """Run the CLI interface""" try: # Create CLI interface cli = InteractiveCLI() @@ -31,6 +33,7 @@ def 
main(): TextGenCommand, ModerationCommand, NERCommand, + QACommand, ] for command in commands_to_register: cli.register_command(command()) @@ -39,11 +42,100 @@ def main(): cli.run() except KeyboardInterrupt: - print("\n👋 Stopping program") + print("\n👋 Stopping CLI") except Exception as e: - print(f"❌ Error: {e}") + print(f"❌ CLI Error: {e}") sys.exit(1) +def run_api(host: str = "127.0.0.1", port: int = 8000, reload: bool = False): + """Run the FastAPI server""" + try: + import uvicorn + + print(f"🚀 Starting AI Lab API server...") + print(f"📡 Server will be available at: http://{host}:{port}") + print(f"📚 API documentation: http://{host}:{port}/docs") + print(f"🔄 Reload mode: {'enabled' if reload else 'disabled'}") + + # Load the main FastAPI application + try: + from src.api.app import app + app_module = "src.api.app:app" + print("📊 Loading AI Lab API with all pipelines") + except ImportError as e: + print(f"❌ Error: Could not load API application: {e}") + print("� Make sure FastAPI dependencies are installed:") + print(" poetry add fastapi uvicorn[standard] pydantic") + sys.exit(1) + + uvicorn.run( + app_module, + host=host, + port=port, + reload=reload, + log_level="info" + ) + except ImportError: + print("❌ FastAPI dependencies not installed. 
Please run: pip install fastapi uvicorn") + print("Or with poetry: poetry add fastapi uvicorn[standard]") + sys.exit(1) + except Exception as e: + print(f"❌ API Error: {e}") + sys.exit(1) + + +def main(): + """Main entry point with argument parsing""" + parser = argparse.ArgumentParser( + description="AI Lab - CLI and API for AI/ML pipelines", + formatter_class=argparse.RawDescriptionHelpFormatter, + epilog=""" +Examples: + %(prog)s # Run CLI interface (default) + %(prog)s --mode cli # Run CLI interface explicitly + %(prog)s --mode api # Run API server + %(prog)s --mode api --port 8080 # Run API server on port 8080 + %(prog)s --mode api --reload # Run API server with auto-reload + """ + ) + + parser.add_argument( + "--mode", + choices=["cli", "api"], + default="cli", + help="Choose between CLI or API mode (default: cli)" + ) + + # API specific arguments + parser.add_argument( + "--host", + default="127.0.0.1", + help="API server host (default: 127.0.0.1)" + ) + + parser.add_argument( + "--port", + type=int, + default=8000, + help="API server port (default: 8000)" + ) + + parser.add_argument( + "--reload", + action="store_true", + help="Enable auto-reload for API development" + ) + + args = parser.parse_args() + + if args.mode == "cli": + print("🖥️ Starting CLI mode...") + run_cli() + elif args.mode == "api": + print("🌐 Starting API mode...") + run_api(host=args.host, port=args.port, reload=args.reload) + + if __name__ == "__main__": main() diff --git a/src/pipelines/__init__.py b/src/pipelines/__init__.py index 5ad9ab6..04a777b 100644 --- a/src/pipelines/__init__.py +++ b/src/pipelines/__init__.py @@ -6,6 +6,7 @@ from .fillmask import FillMaskAnalyzer from .textgen import TextGenerator from .moderation import ContentModerator from .ner import NamedEntityRecognizer +from .qa import QuestionAnsweringSystem from .template import TemplatePipeline -__all__ = ['SentimentAnalyzer', 'FillMaskAnalyzer', 'TextGenerator', 'ContentModerator', 'NamedEntityRecognizer', 
'TemplatePipeline'] +__all__ = ['SentimentAnalyzer', 'FillMaskAnalyzer', 'TextGenerator', 'ContentModerator', 'NamedEntityRecognizer', 'QuestionAnsweringSystem', 'TemplatePipeline'] diff --git a/src/pipelines/fillmask.py b/src/pipelines/fillmask.py index 99817f8..eb39000 100644 --- a/src/pipelines/fillmask.py +++ b/src/pipelines/fillmask.py @@ -46,7 +46,7 @@ class FillMaskAnalyzer: mask_predictions = [ { "token": pred["token_str"], - "score": round(pred["score"], 4), + "score": round(float(pred["score"]), 4), "sequence": pred["sequence"] } for pred in mask_results @@ -66,7 +66,7 @@ class FillMaskAnalyzer: predictions = [ { "token": pred["token_str"], - "score": round(pred["score"], 4), + "score": round(float(pred["score"]), 4), "sequence": pred["sequence"] } for pred in results diff --git a/src/pipelines/moderation.py b/src/pipelines/moderation.py index 25738eb..b286ebc 100644 --- a/src/pipelines/moderation.py +++ b/src/pipelines/moderation.py @@ -70,7 +70,7 @@ class ContentModerator: "original_text": text, "moderated_text": text, "is_modified": False, - "toxic_score": toxic_score, + "toxic_score": float(toxic_score), "words_replaced": 0 } @@ -81,8 +81,8 @@ class ContentModerator: "original_text": text, "moderated_text": moderated_text, "is_modified": True, - "toxic_score": toxic_score, - "words_replaced": words_replaced + "toxic_score": float(toxic_score), + "words_replaced": int(words_replaced) } except Exception as e: diff --git a/src/pipelines/ner.py b/src/pipelines/ner.py index 2d331ef..a059474 100644 --- a/src/pipelines/ner.py +++ b/src/pipelines/ner.py @@ -58,9 +58,9 @@ class NamedEntityRecognizer: processed_entity = { "text": entity["word"], "label": entity_type, - "confidence": round(entity["score"], 4), - "start": entity["start"], - "end": entity["end"], + "confidence": round(float(entity["score"]), 4), + "start": int(entity["start"]), + "end": int(entity["end"]), "emoji": self.entity_colors.get(entity_type, "🏷️") } @@ -70,7 +70,7 @@ class 
NamedEntityRecognizer: if entity_type not in entity_stats: entity_stats[entity_type] = {"count": 0, "entities": []} entity_stats[entity_type]["count"] += 1 - entity_stats[entity_type]["entities"].append(entity["word"]) + entity_stats[entity_type]["entities"].append(str(entity["word"])) # Create highlighted text highlighted_text = self._highlight_entities(text, filtered_entities) @@ -81,7 +81,7 @@ class NamedEntityRecognizer: "entities": filtered_entities, "entity_stats": entity_stats, "total_entities": len(filtered_entities), - "confidence_threshold": confidence_threshold + "confidence_threshold": float(confidence_threshold) } except Exception as e: diff --git a/src/pipelines/qa.py b/src/pipelines/qa.py new file mode 100644 index 0000000..be3aecb --- /dev/null +++ b/src/pipelines/qa.py @@ -0,0 +1,266 @@ +from transformers import pipeline +from typing import Dict, List, Optional, Tuple +from src.config import Config +import re + + +class QuestionAnsweringSystem: + """Question Answering system using transformers""" + + def __init__(self, model_name: Optional[str] = None): + """ + Initialize the question-answering pipeline + + Args: + model_name: Name of the model to use (optional) + """ + self.model_name = model_name or Config.get_model("qa") + print(f"Loading Question Answering model: {self.model_name}") + self.pipeline = pipeline("question-answering", model=self.model_name) + print("QA model loaded successfully!") + + # Default confidence threshold + self.confidence_threshold = 0.1 + + def answer(self, question: str, context: str, max_answer_len: int = 50) -> Dict: + """ + Answer a question based on the given context + + Args: + question: Question to answer + context: Context text containing the answer + max_answer_len: Maximum length of the answer + + Returns: + Dictionary with answer, score, and position information + """ + if not question.strip(): + return {"error": "Empty question"} + + if not context.strip(): + return {"error": "Empty context"} + + try: + 
result = self.pipeline( + question=question, + context=context, + max_answer_len=max_answer_len + ) + + confidence_level = self._get_confidence_level(result["score"]) + highlighted_context = self._highlight_answer_in_context( + context, result["answer"], result["start"], result["end"] + ) + + return { + "question": question, + "context": context, + "answer": result["answer"], + "confidence": round(result["score"], 4), + "confidence_level": confidence_level, + "start_position": result["start"], + "end_position": result["end"], + "highlighted_context": highlighted_context, + "is_confident": result["score"] >= self.confidence_threshold + } + + except Exception as e: + return {"error": f"QA processing error: {str(e)}"} + + def _get_confidence_level(self, score: float) -> str: + """ + Convert numerical score to confidence level + + Args: + score: Confidence score (0-1) + + Returns: + Confidence level description + """ + if score >= 0.8: + return "Very High" + elif score >= 0.6: + return "High" + elif score >= 0.4: + return "Medium" + elif score >= 0.2: + return "Low" + else: + return "Very Low" + + def _highlight_answer_in_context(self, context: str, answer: str, start: int, end: int) -> str: + """ + Highlight the answer within the context + + Args: + context: Original context + answer: Extracted answer + start: Start position of answer + end: End position of answer + + Returns: + Context with highlighted answer + """ + if start < 0 or end > len(context): + return context + + before = context[:start] + highlighted_answer = f"**{answer}**" + after = context[end:] + + return before + highlighted_answer + after + + def answer_multiple(self, questions: List[str], context: str, max_answer_len: int = 50) -> Dict: + """ + Answer multiple questions for the same context + + Args: + questions: List of questions to answer + context: Context text + max_answer_len: Maximum length of answers + + Returns: + Dictionary with all answers and summary statistics + """ + if not questions: + 
return {"error": "No questions provided"} + + if not context.strip(): + return {"error": "Empty context"} + + results = [] + confident_answers = 0 + total_confidence = 0 + + for i, question in enumerate(questions, 1): + result = self.answer(question, context, max_answer_len) + + if "error" not in result: + results.append({ + "question_number": i, + **result + }) + + if result["is_confident"]: + confident_answers += 1 + total_confidence += result["confidence"] + + if not results: + return {"error": "No valid questions processed"} + + average_confidence = total_confidence / len(results) if results else 0 + + return { + "context": context, + "total_questions": len(questions), + "processed_questions": len(results), + "confident_answers": confident_answers, + "average_confidence": round(average_confidence, 4), + "confidence_threshold": self.confidence_threshold, + "results": results + } + + def interactive_qa(self, context: str) -> Dict: + """ + Prepare context for interactive Q&A session + + Args: + context: Context text for questions + + Returns: + Context analysis and preparation info + """ + if not context.strip(): + return {"error": "Empty context"} + + # Basic context analysis + word_count = len(context.split()) + sentence_count = len([s for s in context.split('.') if s.strip()]) + char_count = len(context) + + # Suggest question types based on content + suggested_questions = self._generate_question_suggestions(context) + + return { + "context": context, + "context_stats": { + "word_count": word_count, + "sentence_count": sentence_count, + "character_count": char_count + }, + "suggested_questions": suggested_questions, + "tips": [ + "Ask specific questions about facts mentioned in the text", + "Use question words: Who, What, When, Where, Why, How", + "Keep questions clear and focused", + "The answer should be present in the provided context" + ] + } + + def _generate_question_suggestions(self, context: str) -> List[str]: + """ + Generate suggested questions based 
on context analysis + + Args: + context: Context text + + Returns: + List of suggested question templates + """ + suggestions = [] + + # Check for common patterns and suggest relevant questions + if re.search(r'\b\d{4}\b', context): # Years + suggestions.append("When did [event] happen?") + + if re.search(r'\b[A-Z][a-z]+ [A-Z][a-z]+\b', context): # Names + suggestions.append("Who is [person name]?") + + if re.search(r'\b(founded|created|established|built)\b', context, re.IGNORECASE): + suggestions.append("Who founded/created [organization]?") + + if re.search(r'\b(located|situated|based)\b', context, re.IGNORECASE): + suggestions.append("Where is [place/organization] located?") + + if re.search(r'\b(because|due to|reason)\b', context, re.IGNORECASE): + suggestions.append("Why did [event] happen?") + + if re.search(r'\b(how|method|process)\b', context, re.IGNORECASE): + suggestions.append("How does [process] work?") + + if not suggestions: + suggestions = [ + "What is the main topic of this text?", + "Who are the key people mentioned?", + "What important events are described?" 
+ ] + + return suggestions[:5] # Limit to 5 suggestions + + def set_confidence_threshold(self, threshold: float): + """ + Set the confidence threshold for answers + + Args: + threshold: Threshold between 0 and 1 + """ + if 0 <= threshold <= 1: + self.confidence_threshold = threshold + else: + raise ValueError("Threshold must be between 0 and 1") + + def answer_batch(self, qa_pairs: List[Tuple[str, str]], max_answer_len: int = 50) -> List[Dict]: + """ + Process multiple question-context pairs + + Args: + qa_pairs: List of (question, context) tuples + max_answer_len: Maximum length of answers + + Returns: + List of QA results + """ + return [ + self.answer(question, context, max_answer_len) + for question, context in qa_pairs + ] diff --git a/src/pipelines/textgen.py b/src/pipelines/textgen.py index 8c6b5bf..e29d7c8 100644 --- a/src/pipelines/textgen.py +++ b/src/pipelines/textgen.py @@ -15,17 +15,29 @@ class TextGenerator: """ self.model_name = model_name or Config.get_model("textgen") print(f"Loading text generation model: {self.model_name}") - self.pipeline = pipeline("text-generation", model=self.model_name) + + # Initialize pipeline with proper device configuration + self.pipeline = pipeline( + "text-generation", + model=self.model_name, + device=0 if Config.USE_GPU else -1, + torch_dtype="auto" + ) + + # Set pad token if not available + if self.pipeline.tokenizer.pad_token is None: + self.pipeline.tokenizer.pad_token = self.pipeline.tokenizer.eos_token + print("Model loaded successfully!") - def generate(self, prompt: str, max_length: int = 100, num_return_sequences: int = 1, + def generate(self, prompt: str, max_new_tokens: int = 100, num_return_sequences: int = 1, temperature: float = 1.0, do_sample: bool = True) -> Dict: """ Generate text from a prompt Args: prompt: Input text prompt - max_length: Maximum length of generated text + max_new_tokens: Maximum number of new tokens to generate num_return_sequences: Number of sequences to generate temperature: 
Sampling temperature (higher = more random) do_sample: Whether to use sampling @@ -39,11 +51,12 @@ class TextGenerator: try: results = self.pipeline( prompt, - max_length=max_length, + max_new_tokens=max_new_tokens, num_return_sequences=num_return_sequences, temperature=temperature, do_sample=do_sample, - pad_token_id=self.pipeline.tokenizer.eos_token_id + pad_token_id=self.pipeline.tokenizer.eos_token_id, + return_full_text=True ) generations = [ @@ -57,7 +70,7 @@ class TextGenerator: return { "prompt": prompt, "parameters": { - "max_length": max_length, + "max_new_tokens": max_new_tokens, "num_sequences": num_return_sequences, "temperature": temperature, "do_sample": do_sample diff --git a/ui/index.html b/ui/index.html new file mode 100644 index 0000000..1ca4ea6 --- /dev/null +++ b/ui/index.html @@ -0,0 +1,358 @@ + + + + + + + AI Lab - Interface de Test + + + + + + + +
[The body of the new `ui/index.html` is unrecoverable here: extraction stripped all HTML markup from this hunk, leaving only stray `+` prefixes and bare text nodes. The surviving text indicates a French-language test page ("AI Lab - Interface de Test") with an API connection status indicator (initially "API Déconnectée" / disconnected), an "⚙️ Configuration API" card with a configurable base URL, and one tab per pipeline, each with its own input form and result container: 💭 Sentiment Analysis ("analyze the emotion and tone of a text"), 🏷️ Named Entity Recognition ("identify people, places and organizations"), ❓ Question Answering ("get answers based on a context"), 🎭 Fill Mask ("predict missing words with [MASK]"), 🛡️ Content Moderation ("detect toxic or inappropriate content"), ✍️ Text Generation ("generate creative text from a prompt"), and 📦 Batch Processing ("analyze several texts at once").]
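The UI tabs summarized above map one-to-one onto the pipelines that `src/main.py --mode api` serves. As a rough sketch of the request bodies such a client sends — the endpoint paths and field names below are assumptions inferred from the tab names and the Postman collection structure, not confirmed by this diff (the actual routes live in `src/api/app.py`, which is not shown):

```python
import json

# Hypothetical sketch only: endpoint paths and payload fields are assumed
# from the UI tab names; the real FastAPI routes are defined in
# src/api/app.py, which this diff does not include.
ENDPOINTS = {
    "sentiment": "/sentiment",
    "ner": "/ner",
    "qa": "/qa",
    "fillmask": "/fillmask",
    "moderation": "/moderation",
    "textgen": "/textgen",
}


def build_request(pipeline: str, **fields) -> tuple[str, str]:
    """Return (path, JSON body) for one pipeline, mirroring the UI's
    makeApiRequest(), which POSTs JSON to `${apiUrl}${endpoint}`."""
    if pipeline not in ENDPOINTS:
        raise ValueError(f"Unknown pipeline: {pipeline}")
    return ENDPOINTS[pipeline], json.dumps(fields)


# Example mirrors the sample context used by the QA command's instructions.
path, body = build_request(
    "qa",
    question="When was Einstein born?",
    context="Albert Einstein was born in 1879 in Germany.",
)
```

Pairing this with something like `requests.post(f"http://localhost:8000{path}", data=body, headers={"Content-Type": "application/json"})` would exercise the same code path as the UI's `fetch()` call.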
+ + + + + \ No newline at end of file diff --git a/ui/script.js b/ui/script.js new file mode 100644 index 0000000..233d0a9 --- /dev/null +++ b/ui/script.js @@ -0,0 +1,1057 @@ +// Configuration globale +let apiUrl = "http://localhost:8000"; + +// État de l'application +const appState = { + currentTab: "sentiment", + apiConnected: false +}; + +// Initialisation de l'application +document.addEventListener("DOMContentLoaded", function () { + initializeApp(); +}); + +function initializeApp() { + setupEventListeners(); + checkApiStatus(); + loadExamplesData(); +} + +function setupEventListeners() { + // Navigation entre les onglets + document.querySelectorAll(".nav-tab").forEach((tab) => { + tab.addEventListener("click", (e) => { + const tabName = e.currentTarget.dataset.tab; + switchTab(tabName); + }); + }); + + // Changement d'URL API + document.getElementById("apiUrl").addEventListener("change", function () { + apiUrl = this.value; + checkApiStatus(); + }); + + // Soumission des formulaires + setupFormHandlers(); +} + +function setupFormHandlers() { + // Empêcher la soumission par défaut de tous les formulaires + document.querySelectorAll("form").forEach((form) => { + form.addEventListener("submit", (e) => { + e.preventDefault(); + }); + }); +} + +// Navigation +function switchTab(tabName) { + // Mise à jour de l'état + appState.currentTab = tabName; + + // Mise à jour des onglets + document.querySelectorAll(".nav-tab").forEach((tab) => { + tab.classList.remove("active"); + }); + document.querySelector(`[data-tab="${tabName}"]`).classList.add("active"); + + // Mise à jour du contenu + document.querySelectorAll(".tab-content").forEach((content) => { + content.classList.remove("active"); + }); + document.getElementById(tabName).classList.add("active"); +} + +// Vérification du statut de l'API +async function checkApiStatus() { + const statusElement = document.getElementById("apiStatus"); + const indicator = statusElement.querySelector(".status-indicator"); + const 
statusText = statusElement.querySelector("span"); + const testButton = document.querySelector("button[onclick='checkApiStatus()']"); + + // Feedback visuel pendant le test + if (testButton) { + testButton.disabled = true; + testButton.innerHTML = 'Test en cours...'; + testButton.classList.add("loading"); + } + + // Indicateur de chargement + indicator.className = "status-indicator loading"; + statusText.textContent = "Test de connexion..."; + + try { + const controller = new AbortController(); + const timeoutId = setTimeout(() => controller.abort(), 5000); + + const response = await fetch(`${apiUrl}/health`, { + method: "GET", + signal: controller.signal + }); + + clearTimeout(timeoutId); + + if (response.ok) { + const healthData = await response.json(); + appState.apiConnected = true; + indicator.className = "status-indicator online"; + statusText.textContent = "API Connectée"; + + // Notification de succès + showNotification("✅ Connexion API établie avec succès!", "success"); + + // Afficher les détails de l'API + showApiDetails(healthData); + } else { + throw new Error(`Erreur HTTP ${response.status}`); + } + } catch (error) { + appState.apiConnected = false; + indicator.className = "status-indicator offline"; + statusText.textContent = "API Déconnectée"; + + let errorMessage = "❌ Impossible de se connecter à l'API"; + if (error.name === "AbortError") { + errorMessage += " (Timeout)"; + } else if (error.message.includes("fetch")) { + errorMessage += " (Serveur inaccessible)"; + } else { + errorMessage += ` (${error.message})`; + } + + showNotification(errorMessage, "error"); + console.warn("Erreur de connexion API:", error.message); + } finally { + // Restaurer le bouton + if (testButton) { + testButton.disabled = false; + testButton.innerHTML = '🔄Tester la connexion'; + testButton.classList.remove("loading"); + } + } +} + +// Système de notifications +function showNotification(message, type = "info", duration = 5000) { + // Créer le conteneur de notifications 
s'il n'existe pas + let notificationContainer = document.getElementById("notification-container"); + if (!notificationContainer) { + notificationContainer = document.createElement("div"); + notificationContainer.id = "notification-container"; + notificationContainer.className = "notification-container"; + document.body.appendChild(notificationContainer); + } + + // Créer la notification + const notification = document.createElement("div"); + notification.className = `notification notification-${type}`; + notification.innerHTML = ` +
+ ${message} + +
+ `; + + // Ajouter l'animation d'entrée + notification.style.transform = "translateX(100%)"; + notification.style.opacity = "0"; + notificationContainer.appendChild(notification); + + // Animation d'entrée + setTimeout(() => { + notification.style.transform = "translateX(0)"; + notification.style.opacity = "1"; + }, 10); + + // Suppression automatique + if (duration > 0) { + setTimeout(() => { + if (notification.parentElement) { + notification.style.transform = "translateX(100%)"; + notification.style.opacity = "0"; + setTimeout(() => notification.remove(), 300); + } + }, duration); + } +} + +// Affichage des détails de l'API +function showApiDetails(healthData) { + const existingDetails = document.getElementById("api-details"); + if (existingDetails) { + existingDetails.remove(); + } + + const configSection = document.querySelector(".config-section .config-card"); + const detailsDiv = document.createElement("div"); + detailsDiv.id = "api-details"; + detailsDiv.className = "api-details"; + detailsDiv.innerHTML = ` +
+

📊 Détails de l'API

+ +
+
+
+ Statut: + ✅ ${healthData.status || "healthy"} +
+
+ Pipelines chargés: + ${healthData.pipelines_loaded || 0} +
+
+ Pipelines disponibles: + ${(healthData.available_pipelines || []).join(", ") || "Aucun"} +
+
+ `; + + configSection.appendChild(detailsDiv); +} +async function makeApiRequest(endpoint, data) { + if (!appState.apiConnected) { + throw new Error("API non connectée. Vérifiez la configuration."); + } + + try { + const response = await fetch(`${apiUrl}${endpoint}`, { + method: "POST", + headers: { + "Content-Type": "application/json" + }, + body: JSON.stringify(data) + }); + + const result = await response.json(); + + if (!response.ok) { + throw new Error(result.detail || `Erreur ${response.status}`); + } + + return result; + } catch (error) { + if (error.name === "TypeError" && error.message.includes("fetch")) { + throw new Error("Impossible de contacter l'API. Vérifiez que le serveur est démarré."); + } + throw error; + } +} + +// Gestion de l'affichage des résultats améliorée +function showLoading(containerId) { + const container = document.getElementById(containerId); + container.innerHTML = ` +
+
+
+
+
+

🔄 Traitement en cours...

+

Analyse de votre texte par l'IA

+
+
+
+
+
+ `; + container.classList.add("show"); +} + +function showResult(containerId, data, isError = false) { + const container = document.getElementById(containerId); + const headerClass = isError ? "error" : "success"; + const icon = isError ? "❌" : "✅"; + const title = isError ? "Erreur" : "Résultat"; + + let formattedContent; + if (isError) { + formattedContent = formatErrorResult(data); + } else { + formattedContent = formatResult(data, containerId); + } + + // Ajouter un timestamp + const timestamp = new Date().toLocaleTimeString("fr-FR"); + + container.innerHTML = ` +
+
+
+ ${icon} +
+ ${title} + ${timestamp} +
+
+
+ + + +
+
+
+ ${formattedContent} +
+
+ `; + container.classList.add("show"); + + // Animation d'entrée + const resultCard = container.querySelector(".result-card"); + resultCard.style.transform = "translateY(20px)"; + resultCard.style.opacity = "0"; + setTimeout(() => { + resultCard.style.transform = "translateY(0)"; + resultCard.style.opacity = "1"; + }, 10); +} + +function formatErrorResult(error) { + let errorMessage = "Une erreur inattendue s'est produite"; + let suggestions = []; + + if (typeof error === "object" && error.error) { + errorMessage = error.error; + + // Suggestions basées sur le type d'erreur + if (errorMessage.includes("API non connectée")) { + suggestions.push("Vérifiez que le serveur API est démarré"); + suggestions.push("Testez la connexion avec le bouton 'Tester la connexion'"); + } else if (errorMessage.includes("requis")) { + suggestions.push("Assurez-vous que tous les champs obligatoires sont remplis"); + } else if (errorMessage.includes("MASK")) { + suggestions.push("Utilisez [MASK] dans votre texte pour le fill-mask"); + } + } + + let suggestionsHtml = ""; + if (suggestions.length > 0) { + suggestionsHtml = ` +
+

💡 Suggestions:

+ +
+ `; + } + + return ` +
+
+ Message d'erreur: +

${errorMessage}

+
+ ${suggestionsHtml} +
+ `; +} + +function formatResult(data, containerId) { + const type = containerId.replace("Result", ""); + + switch (type) { + case "sentiment": + return formatSentimentResult(data); + case "ner": + return formatNerResult(data); + case "qa": + return formatQaResult(data); + case "fillmask": + return formatFillmaskResult(data); + case "moderation": + return formatModerationResult(data); + case "textgen": + return formatTextgenResult(data); + case "batch": + return formatBatchResult(data); + default: + return `
${JSON.stringify(data, null, 2)}
`; + } +} + +function formatSentimentResult(data) { + const sentiment = data.sentiment || data.label; + const confidence = data.confidence || data.score; + const badgeClass = sentiment?.toLowerCase() === "positive" ? "positive" : sentiment?.toLowerCase() === "negative" ? "negative" : "neutral"; + + // Calcul de la barre de progression pour la confiance + const confidencePercent = confidence ? (confidence * 100).toFixed(1) : 0; + const progressColor = confidencePercent > 80 ? "var(--success-500)" : confidencePercent > 60 ? "var(--warning-500)" : "var(--error-500)"; + + return ` +
+
+
+ Sentiment détecté: + ${sentiment || "Non déterminé"} +
+
+
+ Niveau de confiance: + ${confidencePercent}% +
+
+
+
+
+
+
+ ${getSentimentInterpretation(sentiment, confidence)} +
+
+ 🔍 Détails techniques +
${JSON.stringify(data, null, 2)}
+
+
+ `; +} + +function formatNerResult(data) { + let entitiesHtml = ""; + let entitiesStats = {}; + + if (data.entities && data.entities.length > 0) { + // Compter les entités par type + data.entities.forEach((entity) => { + const label = entity.label; + entitiesStats[label] = (entitiesStats[label] || 0) + 1; + }); + + entitiesHtml = data.entities + .map((entity, index) => { + const label = entity.label?.toLowerCase() || "misc"; + const confidence = entity.confidence ? (entity.confidence * 100).toFixed(1) : null; + return ` +
+ + ${entity.text} + + ${entity.label} + ${confidence ? `${confidence}%` : ""} +
+        `;
+      })
+      .join("");
+  } else {
+    entitiesHtml = `
🔍 Aucune entité nommée détectée dans ce texte
+    `;
+  }
+
+  // Per-type entity statistics
+  const statsHtml =
+    Object.keys(entitiesStats).length > 0
+      ? `
+

📊 Statistiques des entités:

+
+ ${Object.entries(entitiesStats) + .map( + ([type, count]) => ` +
+ ${type} + ${count} +
+ ` + ) + .join("")} +
+
+  `
+      : "";
+
+  return `
+
+

🏷️ Entités détectées:

+
+ ${entitiesHtml} +
+
+ ${statsHtml} +
+ 🔍 Détails techniques +
${JSON.stringify(data, null, 2)}
+
+
+  `;
+}
+
+function formatQaResult(data) {
+  const confidence = data.confidence ? (data.confidence * 100).toFixed(1) : null;
+  const progressColor = confidence > 80 ? "var(--success-500)" : confidence > 60 ? "var(--warning-500)" : "var(--error-500)";
+
+  return `
+
+
+

❓ Question:

+

${data.question}

+
+
+

💡 Réponse:

+
${data.answer || "Aucune réponse trouvée dans le contexte fourni"}
+
+ ${ + confidence + ? ` +
+
+ Fiabilité de la réponse: + ${confidence}% +
+
+
+
+
+ ` + : "" + } +
+
+ 🔍 Détails techniques +
${JSON.stringify(data, null, 2)}
+
+
+  `;
+}
+
+function formatFillmaskResult(data) {
+  let predictionsHtml = "";
+  if (data.predictions && data.predictions.length > 0) {
+    predictionsHtml = data.predictions
+      .map((pred, index) => {
+        const score = pred.score ? (pred.score * 100).toFixed(1) : null;
+        const rankClass = index === 0 ? "rank-first" : index === 1 ? "rank-second" : "rank-other";
+        return `
+
#${index + 1}
+
+ ${pred.token || pred.token_str} + ${score ? `${score}%` : ""} +
+ ${ + score + ? ` +
+
+
+ ` + : "" + } +
+ `; + }) + .join(""); + } else { + predictionsHtml = `
🔍 Aucune prédiction disponible
`; + } + + return ` +
+
+

🎭 Mots prédits pour remplacer [MASK]:

+
+ ${predictionsHtml} +
+
+
+ 🔍 Détails techniques +
${JSON.stringify(data, null, 2)}
+
+
+  `;
+}
+
+function formatModerationResult(data) {
+  const flagged = data.flagged;
+  const badgeClass = flagged ? "negative" : "positive";
+  const status = flagged ? "CONTENU SIGNALÉ" : "CONTENU APPROPRIÉ";
+  const icon = flagged ? "⚠️" : "✅";
+
+  let categoriesHtml = "";
+  if (data.categories) {
+    const categories = Object.entries(data.categories);
+    if (categories.length > 0) {
+      categoriesHtml = `
+

📋 Détails de l'analyse:

+
+        ${categories
+          .map(([key, value]) => {
+            let displayKey = key;
+            let displayValue = value;
+
+            if (key === "toxic_score") {
+              displayKey = "Score de toxicité";
+              displayValue = typeof value === "number" ? `${(value * 100).toFixed(1)}%` : value;
+            } else if (key === "is_modified") {
+              displayKey = "Contenu modifié";
+              displayValue = value ? "Oui" : "Non";
+            } else if (key === "words_replaced") {
+              displayKey = "Mots remplacés";
+            } else if (key === "restored_text") {
+              displayKey = "Texte restauré";
+            }
+
+            return `
+ ${displayKey}: + ${displayValue} +
+ `; + }) + .join("")} +
+
+ `; + } + } + + return ` +
+
+ ${icon} + ${status} +
+ ${categoriesHtml} +
+ 🔍 Détails techniques +
${JSON.stringify(data, null, 2)}
+
+
+  `;
+}
+
+function formatTextgenResult(data) {
+  const generatedText = data.generated_text || "Aucun texte généré";
+  const prompt = data.prompt || "";
+
+  return `
+
+

📝 Prompt initial:

+
${prompt}
+
+
+

✨ Texte généré:

+
+
${generatedText}
+
+ + +
+
+
+
+ 🔍 Détails techniques +
${JSON.stringify(data, null, 2)}
+
+
+  `;
+}
+
+function formatBatchResult(data) {
+  if (!data.results || data.results.length === 0) {
+    return "Aucun résultat";
+  }
+
+  const resultsHtml = data.results
+    .map((result, index) => {
+      return `
+
Résultat ${index + 1}
+
${JSON.stringify(result, null, 2)}
+
+      `;
+    })
+    .join("");
+
+  return `
+ Résumé: ${data.processed_count} traités, ${data.failed_count} échecs +
+
+ ${resultsHtml} +
+  `;
+}
+
+// Request handlers for each endpoint
+async function fillMask(event) {
+  event.preventDefault();
+  showLoading("fillmaskResult");
+
+  const text = document.getElementById("fillmaskText").value.trim();
+  const model = document.getElementById("fillmaskModel").value;
+
+  if (!text) {
+    showResult("fillmaskResult", { error: "Le texte est requis" }, true);
+    return;
+  }
+
+  if (!text.includes("[MASK]")) {
+    showResult("fillmaskResult", { error: "Le texte doit contenir [MASK]" }, true);
+    return;
+  }
+
+  try {
+    const data = { text };
+    if (model) data.model_name = model;
+
+    const result = await makeApiRequest("/fillmask", data);
+    showResult("fillmaskResult", result);
+  } catch (error) {
+    showResult("fillmaskResult", { error: error.message }, true);
+  }
+}
+
+async function moderateContent(event) {
+  event.preventDefault();
+  showLoading("moderationResult");
+
+  const text = document.getElementById("moderationText").value.trim();
+  const model = document.getElementById("moderationModel").value;
+
+  if (!text) {
+    showResult("moderationResult", { error: "Le texte est requis" }, true);
+    return;
+  }
+
+  try {
+    const data = { text };
+    if (model) data.model_name = model;
+
+    const result = await makeApiRequest("/moderation", data);
+    showResult("moderationResult", result);
+  } catch (error) {
+    showResult("moderationResult", { error: error.message }, true);
+  }
+}
+
+async function generateText(event) {
+  event.preventDefault();
+  showLoading("textgenResult");
+
+  const text = document.getElementById("textgenPrompt").value.trim();
+  const model = document.getElementById("textgenModel").value;
+
+  if (!text) {
+    showResult("textgenResult", { error: "Le prompt est requis" }, true);
+    return;
+  }
+
+  try {
+    const data = { text };
+    if (model) data.model_name = model;
+
+    const result = await makeApiRequest("/textgen", data);
+    showResult("textgenResult", result);
+  } catch (error) {
+    showResult("textgenResult", { error: error.message }, true);
+  }
+}
+
+async function processBatch() {
+  showLoading("batchResult");
+
+  const type = document.getElementById("batchType").value;
+  const textsInput = document.getElementById("batchTexts").value.trim();
+
+  if (!textsInput) {
+    showResult("batchResult", { error: "Les textes sont requis" }, true);
+    return;
+  }
+
+  const texts = textsInput
+    .split("\n")
+    .filter((line) => line.trim())
+    .map((line) => line.trim());
+
+  if (texts.length === 0) {
+    showResult("batchResult", { error: "Aucun texte valide fourni" }, true);
+    return;
+  }
+
+  try {
+    const data = { texts };
+    const result = await makeApiRequest(`/${type}/batch`, data);
+    showResult("batchResult", result);
+  } catch (error) {
+    showResult("batchResult", { error: error.message }, true);
+  }
+}
+
+// Predefined examples (in English for better compatibility with the models)
+const examples = {
+  sentiment: "I love this project! It's really well designed and useful for testing NLP APIs.",
+  ner: "Apple Inc. is an American multinational technology company headquartered in Cupertino, California. Tim Cook is the current CEO of Apple.",
+  qa: {
+    question: "What is the capital of France?",
+    context:
+      "France is a country located in Western Europe. Paris is the capital and largest city of France. The city is famous for the Eiffel Tower and the Louvre Museum."
+  },
+  fillmask: "The capital of France is [MASK].",
+  moderation: "This project is fantastic! Thank you for this excellent work.",
+  textgen: "Once upon a time, in a distant kingdom,",
+  batch: `I love this product!
+This is really terrible.
+Not bad at all.
+Excellent work!
+I hate all of this.`
+};
+
+function loadExample(type) {
+  const example = examples[type];
+
+  if (!example) return;
+
+  switch (type) {
+    case "sentiment":
+      document.getElementById("sentimentText").value = example;
+      break;
+    case "ner":
+      document.getElementById("nerText").value = example;
+      break;
+    case "qa":
+      document.getElementById("qaQuestion").value = example.question;
+      document.getElementById("qaContext").value = example.context;
+      break;
+    case "fillmask":
+      document.getElementById("fillmaskText").value = example;
+      break;
+    case "moderation":
+      document.getElementById("moderationText").value = example;
+      break;
+    case "textgen":
+      document.getElementById("textgenPrompt").value = example;
+      break;
+    case "batch":
+      document.getElementById("batchTexts").value = example;
+      break;
+  }
+}
+
+function loadExamplesData() {
+  // This function can be extended to load examples from an external source
+  console.log("Examples loaded");
+}
+
+// UX utility helpers
+function getSentimentInterpretation(sentiment, confidence) {
+  const confidenceLevel = confidence > 0.8 ? "très élevée" : confidence > 0.6 ? "élevée" : confidence > 0.4 ? "modérée" : "faible";
+
+  let interpretation = "";
+  if (sentiment?.toLowerCase() === "positive") {
+    interpretation = `😊 Le texte exprime un sentiment positif avec une confiance ${confidenceLevel}.`;
+  } else if (sentiment?.toLowerCase() === "negative") {
+    interpretation = `😔 Le texte exprime un sentiment négatif avec une confiance ${confidenceLevel}.`;
+  } else {
+    interpretation = `😐 Le sentiment du texte est neutre avec une confiance ${confidenceLevel}.`;
+  }
+
+  return `
${interpretation}
+  `;
+}
+
+function copyToClipboard(text) {
+  navigator.clipboard
+    .writeText(text)
+    .then(() => {
+      showNotification("📋 Texte copié dans le presse-papiers!", "success", 2000);
+    })
+    .catch(() => {
+      showNotification("❌ Impossible de copier le texte", "error", 2000);
+    });
+}
+
+function copyResultToClipboard(containerId) {
+  const container = document.getElementById(containerId);
+  const resultData = container.querySelector(".result-json");
+  if (resultData) {
+    copyToClipboard(resultData.textContent);
+  }
+}
+
+function toggleResultDetails(containerId) {
+  const container = document.getElementById(containerId);
+  const details = container.querySelector(".result-details");
+  if (details) {
+    details.open = !details.open;
+  }
+}
+
+function regenerateText() {
+  const currentPrompt = document.getElementById("textgenPrompt").value;
+  if (currentPrompt) {
+    // Re-run the generation handler with a synthetic submit event
+    generateText(new Event("submit"));
+  }
+}
+
+// Request handlers with notification feedback
+async function analyzeSentiment(event) {
+  event.preventDefault();
+  showLoading("sentimentResult");
+
+  const text = document.getElementById("sentimentText").value.trim();
+  const model = document.getElementById("sentimentModel").value;
+
+  if (!text) {
+    showResult("sentimentResult", { error: "Le texte est requis pour l'analyse de sentiment" }, true);
+    return;
+  }
+
+  try {
+    showNotification("🔄 Analyse de sentiment en cours...", "info", 2000);
+
+    const data = { text };
+    if (model) data.model_name = model;
+
+    const result = await makeApiRequest("/sentiment", data);
+    showResult("sentimentResult", result);
+    showNotification("✅ Analyse de sentiment terminée!", "success", 3000);
+  } catch (error) {
+    showResult("sentimentResult", { error: error.message }, true);
+    showNotification("❌ Erreur lors de l'analyse de sentiment", "error", 4000);
+  }
+}
+
+async function analyzeNER(event) {
+  event.preventDefault();
+  showLoading("nerResult");
+
+  const text = document.getElementById("nerText").value.trim();
+  const model = document.getElementById("nerModel").value;
+
+  if (!text) {
+    showResult("nerResult", { error: "Le texte est requis pour la reconnaissance d'entités" }, true);
+    return;
+  }
+
+  try {
+    showNotification("🔄 Reconnaissance d'entités en cours...", "info", 2000);
+
+    const data = { text };
+    if (model) data.model_name = model;
+
+    const result = await makeApiRequest("/ner", data);
+    showResult("nerResult", result);
+
+    const entityCount = result.entities ? result.entities.length : 0;
+    showNotification(`✅ ${entityCount} entité(s) détectée(s)!`, "success", 3000);
+  } catch (error) {
+    showResult("nerResult", { error: error.message }, true);
+    showNotification("❌ Erreur lors de la reconnaissance d'entités", "error", 4000);
+  }
+}
+
+async function answerQuestion(event) {
+  event.preventDefault();
+  showLoading("qaResult");
+
+  const question = document.getElementById("qaQuestion").value.trim();
+  const context = document.getElementById("qaContext").value.trim();
+  const model = document.getElementById("qaModel").value;
+
+  if (!question || !context) {
+    showResult("qaResult", { error: "La question et le contexte sont requis" }, true);
+    return;
+  }
+
+  try {
+    showNotification("🔄 Recherche de réponse en cours...", "info", 2000);
+
+    const data = { question, context };
+    if (model) data.model_name = model;
+
+    const result = await makeApiRequest("/qa", data);
+    showResult("qaResult", result);
+    showNotification("✅ Réponse trouvée!", "success", 3000);
+  } catch (error) {
+    showResult("qaResult", { error: error.message }, true);
+    showNotification("❌ Erreur lors de la recherche de réponse", "error", 4000);
+  }
+}
diff --git a/ui/style.css b/ui/style.css
new file mode 100644
index 0000000..e81901d
--- /dev/null
+++ b/ui/style.css
@@ -0,0 +1,1436 @@
+/* Reset and CSS variables */
+:root {
+  /* Primary colors */
+  --primary-50: #f0f9ff;
+  --primary-100: #e0f2fe;
+  --primary-500: #0ea5e9;
+  --primary-600: #0284c7;
+  --primary-700: #0369a1;
+
+  /* Neutral colors */
+  --gray-50: #f9fafb;
+  --gray-100: #f3f4f6;
+  --gray-200: #e5e7eb;
+  --gray-300: #d1d5db;
+  --gray-400: #9ca3af;
+  --gray-500: #6b7280;
+  --gray-600: #4b5563;
+  --gray-700: #374151;
+  --gray-800: #1f2937;
+  --gray-900: #111827;
+
+  /* Status colors */
+  --success-50: #f0fdf4;
+  --success-100: #dcfce7;
+  --success-500: #22c55e;
+  --success-600: #16a34a;
+
+  --warning-50: #fffbeb;
+  --warning-100: #fef3c7;
+  --warning-500: #f59e0b;
+  --warning-600: #d97706;
+
+  --error-50: #fef2f2;
+  --error-100: #fee2e2;
+  --error-500: #ef4444;
+  --error-600: #dc2626;
+
+  /* Shades referenced further down but previously undeclared
+     (values assumed from the matching Tailwind palette) */
+  --primary-800: #075985;
+  --success-200: #bbf7d0;
+  --success-700: #15803d;
+  --success-800: #166534;
+  --warning-800: #92400e;
+  --error-800: #991b1b;
+
+  /* Spacing */
+  --space-1: 0.25rem;
+  --space-2: 0.5rem;
+  --space-3: 0.75rem;
+  --space-4: 1rem;
+  --space-6: 1.5rem;
+  --space-8: 2rem;
+  --space-12: 3rem;
+
+  /* Border radius */
+  --radius-sm: 0.375rem;
+  --radius: 0.5rem;
+  --radius-lg: 0.75rem;
+  --radius-xl: 1rem;
+
+  /* Shadows */
+  --shadow-sm: 0 1px 2px 0 rgb(0 0 0 / 0.05);
+  --shadow: 0 1px 3px 0 rgb(0 0 0 / 0.1), 0 1px 2px -1px rgb(0 0 0 / 0.1);
+  --shadow-lg: 0 10px 15px -3px rgb(0 0 0 / 0.1), 0 4px 6px -4px rgb(0 0 0 / 0.1);
+  --shadow-xl: 0 20px 25px -5px rgb(0 0 0 / 0.1), 0 8px 10px -6px rgb(0 0 0 / 0.1);
+
+  /* Transitions */
+  --transition: all 0.2s cubic-bezier(0.4, 0, 0.2, 1);
+}
+
+/* Reset */
+* {
+  margin: 0;
+  padding: 0;
+  box-sizing: border-box;
+}
+
+html {
+  font-size: 16px;
+  -webkit-text-size-adjust: 100%;
+}
+
+body {
+  font-family: "Inter", -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, sans-serif;
+  line-height: 1.6;
+  color: var(--gray-700);
+  background-color: var(--gray-50);
+  -webkit-font-smoothing: antialiased;
+  -moz-osx-font-smoothing: grayscale;
+}
+
+/* Layout */
+.app {
+  min-height: 100vh;
+  display: flex;
+  flex-direction: column;
+}
+
+.container {
+  max-width: 1200px;
+  margin: 0 auto;
+  padding: 0 var(--space-4);
+  width: 100%;
+}
+
+/* Header */
+.header {
+  background: white;
+  border-bottom: 1px
solid var(--gray-200); + box-shadow: var(--shadow-sm); + position: sticky; + top: 0; + z-index: 100; +} + +.header-content { + display: flex; + justify-content: space-between; + align-items: center; + padding: var(--space-4) 0; +} + +.logo h1 { + font-size: 1.5rem; + font-weight: 700; + color: var(--primary-600); + margin-bottom: var(--space-1); +} + +.logo p { + color: var(--gray-500); + font-size: 0.875rem; +} + +.api-status { + display: flex; + align-items: center; + gap: var(--space-2); + padding: var(--space-2) var(--space-3); + background: var(--gray-50); + border-radius: var(--radius); + font-size: 0.875rem; + font-weight: 500; +} + +.status-indicator { + width: 8px; + height: 8px; + border-radius: 50%; + transition: var(--transition); +} + +.status-indicator.online { + background: var(--success-500); + box-shadow: 0 0 0 2px var(--success-100); +} + +.status-indicator.offline { + background: var(--error-500); + box-shadow: 0 0 0 2px var(--error-100); +} + +.status-indicator.loading { + background: var(--warning-500); + box-shadow: 0 0 0 2px var(--warning-100); + animation: pulse 2s infinite; +} + +@keyframes pulse { + 0%, + 100% { + opacity: 1; + } + 50% { + opacity: 0.5; + } +} + +/* Système de notifications */ +.notification-container { + position: fixed; + top: var(--space-4); + right: var(--space-4); + z-index: 1000; + max-width: 400px; +} + +.notification { + background: white; + border-radius: var(--radius-lg); + box-shadow: var(--shadow-xl); + margin-bottom: var(--space-2); + overflow: hidden; + transition: all 0.3s ease; + border-left: 4px solid; +} + +.notification-info { + border-left-color: var(--primary-500); +} + +.notification-success { + border-left-color: var(--success-500); +} + +.notification-warning { + border-left-color: var(--warning-500); +} + +.notification-error { + border-left-color: var(--error-500); +} + +.notification-content { + display: flex; + align-items: flex-start; + padding: var(--space-4); + gap: var(--space-3); +} + 
+.notification-message { + flex: 1; + font-size: 0.875rem; + line-height: 1.5; + color: var(--gray-700); +} + +.notification-close { + background: none; + border: none; + color: var(--gray-400); + cursor: pointer; + font-size: 1.25rem; + line-height: 1; + padding: 0; + width: 20px; + height: 20px; + display: flex; + align-items: center; + justify-content: center; + border-radius: var(--radius-sm); + transition: var(--transition); +} + +.notification-close:hover { + color: var(--gray-600); + background: var(--gray-100); +} + +/* Détails de l'API */ +.api-details { + margin-top: var(--space-4); + background: var(--success-50); + border: 1px solid var(--success-200); + border-radius: var(--radius); + padding: var(--space-4); + animation: slideDown 0.3s ease; +} + +@keyframes slideDown { + from { + opacity: 0; + transform: translateY(-10px); + } + to { + opacity: 1; + transform: translateY(0); + } +} + +.api-details-header { + display: flex; + justify-content: space-between; + align-items: center; + margin-bottom: var(--space-3); +} + +.api-details-header h4 { + color: var(--success-700); + font-size: 0.875rem; + font-weight: 600; + margin: 0; +} + +.btn-minimal { + background: none; + border: none; + color: var(--gray-500); + cursor: pointer; + font-size: 1rem; + padding: var(--space-1); + border-radius: var(--radius-sm); + transition: var(--transition); +} + +.btn-minimal:hover { + color: var(--gray-700); + background: var(--gray-100); +} + +.api-details-content { + display: flex; + flex-direction: column; + gap: var(--space-2); +} + +.detail-item { + display: flex; + justify-content: space-between; + align-items: center; + font-size: 0.875rem; +} + +.detail-label { + color: var(--gray-600); + font-weight: 500; +} + +.detail-value { + color: var(--gray-700); + font-weight: 400; +} + +.detail-value.success { + color: var(--success-600); +} + +/* Améliorations de chargement */ +.loading-container { + display: flex; + align-items: center; + gap: var(--space-4); + 
padding: var(--space-6); + text-align: center; + background: linear-gradient(135deg, var(--primary-50) 0%, var(--gray-50) 100%); + border-radius: var(--radius-lg); + border: 1px solid var(--primary-100); +} + +.loading-spinner { + display: flex; + align-items: center; + justify-content: center; +} + +.loading-content { + flex: 1; + text-align: left; +} + +.loading-content h4 { + color: var(--primary-700); + margin-bottom: var(--space-2); + font-size: 1.125rem; +} + +.loading-content p { + color: var(--gray-600); + font-size: 0.875rem; + margin-bottom: var(--space-3); +} + +.loading-progress { + background: var(--gray-200); + height: 4px; + border-radius: 2px; + overflow: hidden; +} + +.progress-bar { + height: 100%; + background: linear-gradient(90deg, var(--primary-500), var(--primary-600)); + border-radius: 2px; + animation: progressSlide 2s ease-in-out infinite; +} + +@keyframes progressSlide { + 0% { + transform: translateX(-100%); + } + 100% { + transform: translateX(400%); + } +} + +/* Main */ +.main { + flex: 1; + padding: var(--space-8) 0; +} + +/* Configuration */ +.config-section { + margin-bottom: var(--space-8); +} + +.config-card { + background: white; + border: 1px solid var(--gray-200); + border-radius: var(--radius-xl); + padding: var(--space-6); + box-shadow: var(--shadow-sm); +} + +.config-card h3 { + font-size: 1.125rem; + font-weight: 600; + color: var(--gray-900); + margin-bottom: var(--space-4); +} + +.config-form { + display: flex; + gap: var(--space-4); + align-items: end; +} + +.config-form .input-group { + flex: 1; +} + +/* Navigation */ +.nav-section { + margin-bottom: var(--space-8); +} + +.nav-tabs { + display: flex; + gap: var(--space-2); + background: white; + border: 1px solid var(--gray-200); + border-radius: var(--radius-xl); + padding: var(--space-1); + box-shadow: var(--shadow-sm); + overflow-x: auto; +} + +.nav-tab { + flex: 1; + min-width: 120px; + display: flex; + align-items: center; + justify-content: center; + gap: 
var(--space-2); + padding: var(--space-3) var(--space-4); + border: none; + background: transparent; + border-radius: var(--radius-lg); + cursor: pointer; + font-weight: 500; + font-size: 0.875rem; + color: var(--gray-600); + transition: var(--transition); + white-space: nowrap; +} + +.nav-tab:hover { + background: var(--gray-50); + color: var(--gray-900); +} + +.nav-tab.active { + background: var(--primary-500); + color: white; + box-shadow: var(--shadow); +} + +.tab-icon { + font-size: 1rem; +} + +/* Content */ +.tab-contents { + position: relative; +} + +.tab-content { + display: none; +} + +.tab-content.active { + display: block; + animation: fadeIn 0.3s ease-out; +} + +@keyframes fadeIn { + from { + opacity: 0; + transform: translateY(8px); + } + to { + opacity: 1; + transform: translateY(0); + } +} + +.content-card { + background: white; + border: 1px solid var(--gray-200); + border-radius: var(--radius-xl); + box-shadow: var(--shadow-sm); + overflow: hidden; +} + +.card-header { + padding: var(--space-6); + border-bottom: 1px solid var(--gray-200); + background: var(--gray-50); +} + +.card-header h2 { + font-size: 1.25rem; + font-weight: 600; + color: var(--gray-900); + margin-bottom: var(--space-2); +} + +.card-header p { + color: var(--gray-600); + margin: 0; +} + +/* Forms */ +.form { + padding: var(--space-6); +} + +.form-group { + margin-bottom: var(--space-6); +} + +.form-row { + display: grid; + grid-template-columns: 1fr 1fr; + gap: var(--space-4); +} + +.input-group { + display: flex; + flex-direction: column; +} + +label { + display: block; + font-weight: 500; + color: var(--gray-700); + margin-bottom: var(--space-2); + font-size: 0.875rem; +} + +input, +textarea, +select { + width: 100%; + padding: var(--space-3) var(--space-4); + border: 1px solid var(--gray-300); + border-radius: var(--radius); + font-size: 0.875rem; + color: var(--gray-700); + background: white; + transition: var(--transition); +} + +input:focus, +textarea:focus, +select:focus 
{ + outline: none; + border-color: var(--primary-500); + box-shadow: 0 0 0 3px rgba(14, 165, 233, 0.1); +} + +textarea { + resize: vertical; + min-height: 100px; +} + +select { + cursor: pointer; +} + +.form-actions { + display: flex; + gap: var(--space-3); + justify-content: flex-end; + margin-top: var(--space-6); +} + +/* Buttons */ +.btn { + display: inline-flex; + align-items: center; + gap: var(--space-2); + padding: var(--space-3) var(--space-4); + border: 1px solid transparent; + border-radius: var(--radius); + font-weight: 500; + font-size: 0.875rem; + text-decoration: none; + cursor: pointer; + transition: var(--transition); + white-space: nowrap; +} + +.btn:disabled { + opacity: 0.6; + cursor: not-allowed; +} + +.btn-primary { + background: var(--primary-500); + color: white; + border-color: var(--primary-500); +} + +.btn-primary:hover:not(:disabled) { + background: var(--primary-600); + border-color: var(--primary-600); + transform: translateY(-1px); + box-shadow: var(--shadow); +} + +.btn-secondary { + background: white; + color: var(--gray-700); + border-color: var(--gray-300); +} + +.btn-secondary:hover:not(:disabled) { + background: var(--gray-50); + border-color: var(--gray-400); +} + +.btn-icon { + font-size: 1rem; +} + +/* Results */ +.result-container { + margin-top: var(--space-6); + display: none; +} + +.result-container.show { + display: block; + animation: fadeIn 0.3s ease-out; +} + +.result-card { + border: 1px solid var(--gray-200); + border-radius: var(--radius-lg); + overflow: hidden; +} + +.result-header { + display: flex; + align-items: center; + gap: var(--space-2); + padding: var(--space-4); + font-weight: 600; + font-size: 0.875rem; +} + +.result-header.success { + background: var(--success-50); + color: var(--success-600); + border-bottom: 1px solid var(--success-100); +} + +.result-header.error { + background: var(--error-50); + color: var(--error-600); + border-bottom: 1px solid var(--error-100); +} + +.result-content { + padding: 
var(--space-4); + background: white; +} + +.result-json { + background: var(--gray-50); + border: 1px solid var(--gray-200); + border-radius: var(--radius); + padding: var(--space-4); + font-family: "Monaco", "Menlo", "Ubuntu Mono", monospace; + font-size: 0.8rem; + line-height: 1.5; + white-space: pre-wrap; + overflow-x: auto; + color: var(--gray-700); +} + +/* Badges et entités */ +.badge { + display: inline-flex; + align-items: center; + padding: var(--space-1) var(--space-2); + border-radius: var(--radius-sm); + font-size: 0.75rem; + font-weight: 500; + margin: var(--space-1); +} + +.badge.positive { + background: var(--success-100); + color: var(--success-600); +} + +.badge.negative { + background: var(--error-100); + color: var(--error-600); +} + +.badge.neutral { + background: var(--gray-100); + color: var(--gray-600); +} + +.entity { + display: inline-flex; + align-items: center; + padding: var(--space-1) var(--space-2); + border-radius: var(--radius-sm); + font-size: 0.75rem; + font-weight: 500; + margin: var(--space-1); +} + +.entity.person { + background: rgba(59, 130, 246, 0.1); + color: #1d4ed8; +} + +.entity.org { + background: rgba(245, 158, 11, 0.1); + color: #b45309; +} + +.entity.loc { + background: rgba(34, 197, 94, 0.1); + color: #15803d; +} + +.entity.misc { + background: rgba(168, 85, 247, 0.1); + color: #7c3aed; +} + +/* Batch results */ +.batch-results { + display: grid; + gap: var(--space-4); +} + +.batch-item { + background: var(--gray-50); + border: 1px solid var(--gray-200); + border-radius: var(--radius); + padding: var(--space-4); +} + +.batch-item-header { + font-weight: 500; + color: var(--gray-900); + margin-bottom: var(--space-2); + font-size: 0.875rem; +} + +/* Loading */ +.loading { + display: flex; + flex-direction: column; + align-items: center; + padding: var(--space-8); + color: var(--gray-500); +} + +.spinner { + width: 32px; + height: 32px; + border: 3px solid var(--gray-200); + border-top: 3px solid var(--primary-500); + 
border-radius: 50%;
+  animation: spin 1s linear infinite;
+  margin-bottom: var(--space-4);
+}
+
+@keyframes spin {
+  0% {
+    transform: rotate(0deg);
+  }
+  100% {
+    transform: rotate(360deg);
+  }
+}
+
+/* Responsive */
+@media (max-width: 768px) {
+  .container {
+    padding: 0 var(--space-3);
+  }
+
+  .header-content {
+    flex-direction: column;
+    gap: var(--space-4);
+    text-align: center;
+  }
+
+  .nav-tabs {
+    flex-wrap: wrap;
+  }
+
+  .nav-tab {
+    flex: none;
+    min-width: auto;
+  }
+
+  .form-row {
+    grid-template-columns: 1fr;
+  }
+
+  .config-form {
+    flex-direction: column;
+  }
+
+  .form-actions {
+    flex-direction: column-reverse;
+  }
+
+  .btn {
+    justify-content: center;
+  }
+}
+
+@media (max-width: 480px) {
+  .container {
+    padding: 0 var(--space-2);
+  }
+
+  .main {
+    padding: var(--space-4) 0;
+  }
+
+  .config-card,
+  .content-card .form,
+  .card-header {
+    padding: var(--space-4);
+  }
+}
+
+/* Enhanced result styles */
+.result-header {
+  display: flex;
+  justify-content: space-between;
+  align-items: flex-start;
+  padding: var(--space-4);
+  border-bottom: 1px solid var(--gray-200);
+}
+
+.result-header.success {
+  background: linear-gradient(135deg, var(--success-50) 0%, var(--success-100) 100%);
+  color: var(--success-800);
+}
+
+.result-header.error {
+  background: linear-gradient(135deg, var(--error-50) 0%, var(--error-100) 100%);
+  color: var(--error-800);
+}
+
+.result-header-main {
+  display: flex;
+  align-items: center;
+  gap: var(--space-3);
+}
+
+.result-icon {
+  font-size: 1.25rem;
+}
+
+.result-title .title {
+  font-weight: 600;
+  font-size: 1rem;
+  display: block;
+}
+
+.result-title .timestamp {
+  font-size: 0.75rem;
+  opacity: 0.7;
+  display: block;
+  margin-top: 2px;
+}
+
+.result-actions {
+  display: flex;
+  gap: var(--space-1);
+}
+
+.result-content {
+  padding: var(--space-4);
+}
+
+/* Sentiment result styles */
+.sentiment-result {
+  display: flex;
+  flex-direction: column;
+  gap: var(--space-4);
+}
+
+.sentiment-main {
+  display: flex;
+  flex-direction: column;
+  gap: var(--space-3);
+}
+
+.sentiment-label {
+  display: flex;
+  align-items: center;
+  gap: var(--space-3);
+}
+
+.label-title {
+  font-weight: 600;
+  color: var(--gray-700);
+}
+
+.confidence-section {
+  display: flex;
+  flex-direction: column;
+  gap: var(--space-2);
+}
+
+.confidence-header {
+  display: flex;
+  justify-content: space-between;
+  align-items: center;
+}
+
+.confidence-title {
+  font-weight: 600;
+  color: var(--gray-700);
+  font-size: 0.875rem;
+}
+
+.confidence-value {
+  font-weight: 700;
+  color: var(--primary-600);
+}
+
+.confidence-bar {
+  background: var(--gray-200);
+  height: 8px;
+  border-radius: 4px;
+  overflow: hidden;
+}
+
+.confidence-progress {
+  height: 100%;
+  border-radius: 4px;
+  transition: width 0.6s ease;
+}
+
+.interpretation {
+  background: var(--gray-50);
+  padding: var(--space-3);
+  border-radius: var(--radius);
+  border-left: 3px solid var(--primary-500);
+  font-style: italic;
+  color: var(--gray-700);
+}
+
+/* NER result styles */
+.ner-result {
+  display: flex;
+  flex-direction: column;
+  gap: var(--space-4);
+}
+
+.entities-container {
+  display: flex;
+  flex-wrap: wrap;
+  gap: var(--space-2);
+  margin-top: var(--space-2);
+}
+
+.entity-item {
+  display: flex;
+  align-items: center;
+  gap: var(--space-2);
+  background: var(--gray-50);
+  padding: var(--space-2) var(--space-3);
+  border-radius: var(--radius);
+  border: 1px solid var(--gray-200);
+}
+
+.entity {
+  font-weight: 600;
+  padding: var(--space-1) var(--space-2);
+  border-radius: var(--radius-sm);
+  font-size: 0.875rem;
+}
+
+.entity.person {
+  background: var(--primary-100);
+  color: var(--primary-800);
+}
+.entity.per {
+  background: var(--primary-100);
+  color: var(--primary-800);
+}
+.entity.organization {
+  background: var(--warning-100);
+  color: var(--warning-800);
+}
+.entity.org {
+  background: var(--warning-100);
+  color: var(--warning-800);
+}
+.entity.location {
+  background: var(--success-100);
+  color: var(--success-800);
+}
+.entity.loc {
+  background: var(--success-100);
+  color: var(--success-800);
+}
+.entity.misc {
+  background: var(--gray-100);
+  color: var(--gray-800);
+}
+
+.entity-label {
+  font-size: 0.75rem;
+  color: var(--gray-600);
+  font-weight: 500;
+}
+
+.entity-confidence {
+  font-size: 0.75rem;
+  color: var(--gray-500);
+}
+
+.entities-stats {
+  background: var(--gray-50);
+  padding: var(--space-3);
+  border-radius: var(--radius);
+}
+
+.stats-grid {
+  display: grid;
+  grid-template-columns: repeat(auto-fit, minmax(120px, 1fr));
+  gap: var(--space-2);
+  margin-top: var(--space-2);
+}
+
+.stat-item {
+  display: flex;
+  justify-content: space-between;
+  align-items: center;
+  padding: var(--space-2);
+  background: white;
+  border-radius: var(--radius-sm);
+  font-size: 0.875rem;
+}
+
+.stat-type {
+  font-weight: 500;
+  color: var(--gray-700);
+}
+
+.stat-count {
+  font-weight: 700;
+  color: var(--primary-600);
+}
+
+.no-entities {
+  text-align: center;
+  color: var(--gray-500);
+  font-style: italic;
+  padding: var(--space-4);
+  background: var(--gray-50);
+  border-radius: var(--radius);
+}
+
+/* QA result styles */
+.qa-result {
+  display: flex;
+  flex-direction: column;
+  gap: var(--space-4);
+}
+
+.qa-question,
+.qa-answer {
+  background: var(--gray-50);
+  padding: var(--space-3);
+  border-radius: var(--radius);
+}
+
+.qa-question h4,
+.qa-answer h4 {
+  margin-bottom: var(--space-2);
+  color: var(--gray-700);
+  font-size: 0.875rem;
+  font-weight: 600;
+}
+
+.question-text {
+  color: var(--gray-800);
+  font-weight: 500;
+}
+
+.answer-text {
+  color: var(--gray-800);
+  font-weight: 500;
+  background: white;
+  padding: var(--space-3);
+  border-radius: var(--radius-sm);
+  border-left: 3px solid var(--primary-500);
+}
+
+.qa-confidence {
+  display: flex;
+  flex-direction: column;
+  gap: var(--space-2);
+}
+
+/* Fillmask result styles */
+.fillmask-result {
+  display: flex;
+  flex-direction: column;
+  gap: var(--space-4);
+}
+
+.predictions-container {
+  display: flex;
+  flex-direction: column;
+  gap: var(--space-2);
+  margin-top: var(--space-2);
+}
+
+.prediction-item {
+  display: flex;
+  align-items: center;
+  gap: var(--space-3);
+  padding: var(--space-3);
+  background: var(--gray-50);
+  border-radius: var(--radius);
+  border: 1px solid var(--gray-200);
+}
+
+.prediction-item.rank-first {
+  background: linear-gradient(135deg, var(--success-50) 0%, var(--success-100) 100%);
+  border-color: var(--success-200);
+}
+
+.prediction-item.rank-second {
+  background: linear-gradient(135deg, var(--warning-50) 0%, var(--warning-100) 100%);
+  border-color: var(--warning-200);
+}
+
+.prediction-rank {
+  font-weight: 700;
+  color: var(--primary-600);
+  font-size: 0.875rem;
+  min-width: 24px;
+}
+
+.prediction-content {
+  flex: 1;
+  display: flex;
+  justify-content: space-between;
+  align-items: center;
+}
+
+.prediction-token {
+  font-weight: 600;
+  color: var(--gray-800);
+}
+
+.prediction-score {
+  font-weight: 500;
+  color: var(--gray-600);
+  font-size: 0.875rem;
+}
+
+.prediction-bar {
+  flex: 1;
+  background: var(--gray-200);
+  height: 4px;
+  border-radius: 2px;
+  margin-left: var(--space-3);
+  overflow: hidden;
+}
+
+.prediction-progress {
+  height: 100%;
+  background: var(--primary-500);
+  border-radius: 2px;
+  transition: width 0.6s ease;
+}
+
+.no-predictions {
+  text-align: center;
+  color: var(--gray-500);
+  font-style: italic;
+  padding: var(--space-4);
+  background: var(--gray-50);
+  border-radius: var(--radius);
+}
+
+/* Moderation result styles */
+.moderation-result {
+  display: flex;
+  flex-direction: column;
+  gap: var(--space-4);
+}
+
+.moderation-status {
+  display: flex;
+  align-items: center;
+  gap: var(--space-3);
+  padding: var(--space-4);
+  border-radius: var(--radius);
+  font-weight: 600;
+}
+
+.moderation-status.positive {
+  background: linear-gradient(135deg, var(--success-50) 0%, var(--success-100) 100%);
+  color: var(--success-800);
+  border: 1px solid var(--success-200);
+}
+
+.moderation-status.negative {
+  background: linear-gradient(135deg, var(--error-50) 0%, var(--error-100) 100%);
+  color: var(--error-800);
+  border: 1px solid var(--error-200);
+}
+
+.status-icon {
+  font-size: 1.25rem;
+}
+
+.moderation-details {
+  background: var(--gray-50);
+  padding: var(--space-3);
+  border-radius: var(--radius);
+}
+
+.moderation-categories {
+  display: flex;
+  flex-direction: column;
+  gap: var(--space-2);
+  margin-top: var(--space-2);
+}
+
+.category-item {
+  display: flex;
+  justify-content: space-between;
+  align-items: center;
+  padding: var(--space-2);
+  background: white;
+  border-radius: var(--radius-sm);
+  font-size: 0.875rem;
+}
+
+.category-label {
+  font-weight: 500;
+  color: var(--gray-700);
+}
+
+.category-value {
+  font-weight: 600;
+  color: var(--gray-800);
+}
+
+/* Text generation result styles */
+.textgen-result {
+  display: flex;
+  flex-direction: column;
+  gap: var(--space-4);
+}
+
+.textgen-prompt {
+  background: var(--gray-50);
+  padding: var(--space-3);
+  border-radius: var(--radius);
+}
+
+.textgen-prompt h4 {
+  margin-bottom: var(--space-2);
+  color: var(--gray-700);
+  font-size: 0.875rem;
+  font-weight: 600;
+}
+
+.prompt-text {
+  color: var(--gray-800);
+  font-style: italic;
+}
+
+.textgen-output {
+  background: var(--primary-50);
+  padding: var(--space-3);
+  border-radius: var(--radius);
+  border: 1px solid var(--primary-200);
+}
+
+.textgen-output h4 {
+  margin-bottom: var(--space-2);
+  color: var(--primary-700);
+  font-size: 0.875rem;
+  font-weight: 600;
+}
+
+.generated-text {
+  background: white;
+  padding: var(--space-4);
+  border-radius: var(--radius-sm);
+  border-left: 3px solid var(--primary-500);
+}
+
+.text-content {
+  color: var(--gray-800);
+  line-height: 1.7;
+  margin-bottom: var(--space-3);
+}
+
+.text-actions {
+  display: flex;
+  gap: var(--space-2);
+  justify-content: flex-end;
+}
+
+/* Error result styles */
+.error-content {
+  display: flex;
+  flex-direction: column;
+  gap: var(--space-3);
+}
+
+.error-message {
+  background: var(--error-50);
+  padding: var(--space-3);
+  border-radius: var(--radius);
+  border-left: 3px solid var(--error-500);
+}
+
+.error-message strong {
+  color: var(--error-700);
+  display: block;
+  margin-bottom: var(--space-1);
+}
+
+.error-message p {
+  color: var(--error-800);
+  margin: 0;
+}
+
+.error-suggestions {
+  background: var(--warning-50);
+  padding: var(--space-3);
+  border-radius: var(--radius);
+  border-left: 3px solid var(--warning-500);
+}
+
+.error-suggestions h4 {
+  color: var(--warning-700);
+  margin-bottom: var(--space-2);
+  font-size: 0.875rem;
+}
+
+.error-suggestions ul {
+  margin: 0;
+  padding-left: var(--space-4);
+}
+
+.error-suggestions li {
+  color: var(--warning-800);
+  margin-bottom: var(--space-1);
+  font-size: 0.875rem;
+}
+
+/* Technical details */
+.result-details {
+  margin-top: var(--space-3);
+  border: 1px solid var(--gray-200);
+  border-radius: var(--radius);
+}
+
+.result-details summary {
+  padding: var(--space-3);
+  background: var(--gray-50);
+  cursor: pointer;
+  font-weight: 500;
+  color: var(--gray-700);
+  font-size: 0.875rem;
+  border-radius: var(--radius);
+}
+
+.result-details summary:hover {
+  background: var(--gray-100);
+}
+
+.result-details[open] summary {
+  border-bottom: 1px solid var(--gray-200);
+  border-radius: var(--radius) var(--radius) 0 0;
+}
+
+.result-json {
+  padding: var(--space-3);
+  background: var(--gray-900);
+  color: var(--gray-100);
+  font-family: "Courier New", monospace;
+  font-size: 0.75rem;
+  white-space: pre-wrap;
+  overflow-x: auto;
+  border-radius: 0 0 var(--radius) var(--radius);
+}
+
+/* Loading states */
+.btn.loading {
+  position: relative;
+  color: transparent !important;
+}
+
+.btn.loading::after {
+  content: "";
+  position: absolute;
+  top: 50%;
+  left: 50%;
+  width: 16px;
+  height: 16px;
+  margin-top: -8px;
+  margin-left: -8px;
+  border: 2px solid transparent;
+  /* currentColor would resolve to the button's transparent color, hiding the spinner */
+  border-top: 2px solid white;
+  border-radius: 50%;
+  animation: spin 1s linear infinite;
+}