diff --git a/README.md b/README.md index ecabe4d..853b525 100644 --- a/README.md +++ b/README.md @@ -11,11 +11,12 @@

About • + Features • Prerequisites • Installation • Usage • - License • - Disclaimer + API • + License

@@ -32,169 +33,141 @@ ## About -xtream2m3u is a powerful and flexible tool designed to bridge the gap between Xtream API-based IPTV services and M3U playlist-compatible media players. It provides a simple API that fetches live streams from Xtream IPTV services, filters out unwanted channel groups, and generates a customized M3U playlist file. +**xtream2m3u** is a powerful and flexible tool designed to bridge the gap between Xtream API-based IPTV services and M3U playlist-compatible media players. It offers a **user-friendly web interface** and a **comprehensive API** to generate customized playlists. -### Why xtream2m3u? +Many IPTV providers use the Xtream API, which isn't directly compatible with all players. xtream2m3u allows you to: +1. Connect to your Xtream IPTV provider. +2. Select exactly which channel groups (Live TV) or VOD categories (Movies/Series) you want. +3. Generate a standard M3U playlist compatible with almost any player (VLC, TiviMate, Televizo, etc.). -Many IPTV providers use the Xtream API, which isn't directly compatible with media players that accept M3U playlists. xtream2m3u solves this problem by: +## Features -1. Connecting to Xtream API-based IPTV services -2. Fetching the list of available live streams -3. Allowing users to filter channels by including only wanted groups or excluding unwanted groups -4. Generating a standard M3U playlist that's compatible with a wide range of media players +* **Web Interface:** Easy-to-use UI for managing credentials and selecting categories. +* **Custom Playlists:** Filter channels by including or excluding specific groups. +* **VOD Support:** Optionally include Movies and Series in your playlist. +* **Stream Proxying:** built-in proxy to handle CORS issues or hide upstream URLs. +* **Custom DNS:** Uses reliable DNS resolvers (Cloudflare, Google) to ensure connection stability. +* **XMLTV EPG:** Generates a compatible XMLTV guide for your playlist. 
+* **Docker Ready:** Simple deployment with Docker and Docker Compose. ## Prerequisites To use xtream2m3u, you'll need: +* An active subscription to an IPTV service that uses the Xtream API. -- An active subscription to an IPTV service that uses the Xtream API - -For deployment, you'll need one of the following: - -- Docker and Docker Compose -- Python 3.12 or higher - -## Environment Variables - -The application supports the following environment variables: - -- `PROXY_URL`: [Optional] Set a default custom base URL for all proxied content (can be overridden with the `proxy_url` parameter) +For deployment: +* **Docker & Docker Compose** (Recommended) +* OR **Python 3.9+** ## Installation ### Using Docker (Recommended) -1. Install Docker and Docker Compose -2. Clone the repository: - ``` - git clone https://github.com/ovosimpatico/xtream2m3u.git - cd xtream2m3u - ``` -3. Run the application: - ``` - docker-compose up -d - ``` +1. Clone the repository: + ```bash + git clone https://github.com/ovosimpatico/xtream2m3u.git + cd xtream2m3u + ``` +2. Run the application: + ```bash + docker-compose up -d + ``` +3. Open your browser and navigate to `http://localhost:5000`. ### Native Python Installation -1. Install Python (3.9 or higher) -2. Clone the repository: - ``` - git clone https://github.com/ovosimpatico/xtream2m3u.git - cd xtream2m3u - ``` -3. Create a virtual environment: - ``` - python -m venv venv - source venv/bin/activate # On Windows, use `venv\Scripts\activate` - ``` -4. Install the required packages: - ``` - pip install -r requirements.txt - ``` -5. Run the application: - ``` - python run.py - ``` +1. Clone the repository and enter the directory: + ```bash + git clone https://github.com/ovosimpatico/xtream2m3u.git + cd xtream2m3u + ``` +2. Create and activate a virtual environment: + ```bash + python -m venv venv + source venv/bin/activate # On Windows: venv\Scripts\activate + ``` +3. 
Install dependencies: + ```bash + pip install -r requirements.txt + ``` +4. Run the server: + ```bash + python run.py + ``` +5. Open your browser and navigate to `http://localhost:5000`. ## Usage -### API Endpoints +### Web Interface +The easiest way to use xtream2m3u is via the web interface at `http://localhost:5000`. +1. **Enter Credentials:** Input your IPTV provider's URL, username, and password. +2. **Select Content:** Choose whether to include VOD (Movies & Series). +3. **Filter Categories:** Load categories and select which ones to include or exclude. +4. **Generate:** Click "Generate Playlist" to download your custom M3U file. -The application provides several endpoints for generating playlists and proxying media: +### Environment Variables +* `PROXY_URL`: [Optional] Set a custom base URL for proxied content (useful if running behind a reverse proxy). +* `PORT`: [Optional] Port to run the server on (default: 5000). -#### M3U Playlist Generation +## API Documentation +For advanced users or automation, you can use the API endpoints directly. + +### 1. 
Generate M3U Playlist +`GET /m3u` or `POST /m3u` + +| Parameter | Type | Required | Description | +| :--- | :--- | :--- | :--- | +| `url` | string | Yes | IPTV Service URL | +| `username` | string | Yes | IPTV Username | +| `password` | string | Yes | IPTV Password | +| `unwanted_groups` | string | No | Comma-separated list of groups to **exclude** | +| `wanted_groups` | string | No | Comma-separated list of groups to **include** (takes precedence) | +| `include_vod` | boolean | No | Set `true` to include Movies & Series (default: `false`) | +| `nostreamproxy` | boolean | No | Set `true` to disable stream proxying (direct links) | +| `proxy_url` | string | No | Custom base URL for proxied streams | +| `include_channel_id` | boolean | No | Set `true` to include `epg_channel_id` tag | +| `channel_id_tag` | string | No | Custom tag name for channel ID (default: `channel-id`) | + +**Wildcard Support:** `unwanted_groups` and `wanted_groups` support `*` (wildcard) and `?` (single char). +* Example: `*Sports*` matches "Sky Sports", "BeIN Sports", etc. + +**Example:** ``` -GET /m3u +http://localhost:5000/m3u?url=http://iptv.com&username=user&password=pass&wanted_groups=Sports*,News&include_vod=true ``` -##### Query Parameters +### 2. 
Generate XMLTV Guide +`GET /xmltv` -- `url` (required): The base URL of your IPTV service -- `username` (required): Your IPTV service username -- `password` (required): Your IPTV service password -- `unwanted_groups` (optional): A comma-separated list of group names to exclude -- `wanted_groups` (optional): A comma-separated list of group names to include (takes precedence over unwanted_groups) -- `nostreamproxy` (optional): Set to 'true' to disable stream proxying -- `proxy_url` (optional): Custom base URL for proxied content (overrides auto-detection) -- `include_channel_id` (optional): Set to 'true' to include `epg_channel_id` in M3U, useful for [Channels](https://getchannels.com) -- `channel_id_tag` (optional): Name of the tag to use for `epg_channel_id` data in M3U, defaults to `channel-id` +| Parameter | Type | Required | Description | +| :--- | :--- | :--- | :--- | +| `url` | string | Yes | IPTV Service URL | +| `username` | string | Yes | IPTV Username | +| `password` | string | Yes | IPTV Password | +| `proxy_url` | string | No | Custom base URL for proxied images | -Note: For `unwanted_groups` and `wanted_groups`, you can use wildcard patterns with `*` and `?` characters. For example: -- `US*` will match all groups starting with "US" -- `*Sports*` will match any group containing "Sports" -- `US| ?/?/?` will match groups like "US| 24/7" +### 3. Get Categories +`GET /categories` -##### Example Request +Returns a JSON list of all available categories. 
-``` -http://localhost:5000/m3u?url=http://your-iptv-service.com&username=your_username&password=your_password&unwanted_groups=news,sports -``` +| Parameter | Type | Required | Description | +| :--- | :--- | :--- | :--- | +| `url` | string | Yes | IPTV Service URL | +| `username` | string | Yes | IPTV Username | +| `password` | string | Yes | IPTV Password | +| `include_vod` | boolean | No | Set `true` to include VOD categories | -Or to only include specific groups: - -``` -http://localhost:5000/m3u?url=http://your-iptv-service.com&username=your_username&password=your_password&wanted_groups=movies,series -``` - -With a custom proxy URL: - -``` -http://localhost:5000/m3u?url=http://your-iptv-service.com&username=your_username&password=your_password&proxy_url=https://your-public-domain.com -``` - -#### XMLTV Guide Generation - -``` -GET /xmltv -``` - -##### Query Parameters - -- `url` (required): The base URL of your IPTV service -- `username` (required): Your IPTV service username -- `password` (required): Your IPTV service password -- `proxy_url` (optional): Custom base URL for proxied content (overrides auto-detection) - - -##### Example Request - -``` -http://localhost:5000/xmltv?url=http://your-iptv-service.com&username=your_username&password=your_password -``` - -With a custom proxy URL: - -``` -http://localhost:5000/xmltv?url=http://your-iptv-service.com&username=your_username&password=your_password&proxy_url=https://your-public-domain.com -``` - -#### Image Proxy - -``` -GET /image-proxy/ -``` - -Proxies image requests, like channel logos and EPG images. - -#### Stream Proxy - -``` -GET /stream-proxy/ -``` - -Proxies video streams. Supports the following formats: -- MPEG-TS (.ts) -- HLS (.m3u8) -- Generic video streams +### 4. Proxy Endpoints +* `GET /image-proxy/`: Proxies images (logos, covers). +* `GET /stream-proxy/`: Proxies video streams. ## License -This project is licensed under the GNU Affero General Public License v3.0 (AGPLv3). 
This license requires that any modifications to the code must also be made available under the same license, even when the software is run as a service (e.g., over a network). See the [LICENSE](LICENSE) file for details. +This project is licensed under the **GNU Affero General Public License v3.0 (AGPLv3)**. +See the [LICENSE](LICENSE) file for details. ## Disclaimer -xtream2m3u is a tool for generating M3U playlists from Xtream API-based IPTV services but does not provide IPTV services itself. A valid subscription to an IPTV service using the Xtream API is required to use this tool. - -xtream2m3u does not endorse piracy and requires users to ensure they have the necessary rights and permissions. The developers are not responsible for any misuse of the software or violations of IPTV providers' terms of service. \ No newline at end of file +xtream2m3u is a tool for managing your own legal IPTV subscriptions. It **does not** provide any content, channels, or streams. The developers are not responsible for how this tool is used. 
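The wildcard filtering documented above for `wanted_groups`/`unwanted_groups` can be approximated with Python's stdlib `fnmatch`. This is an illustrative sketch only — the project's actual `group_matches` helper lives in `app.utils` and may differ in detail (e.g. case handling):

```python
from fnmatch import fnmatch

def group_matches(group_name: str, pattern: str) -> bool:
    """Case-insensitive wildcard match: '*' matches any run of characters, '?' exactly one."""
    return fnmatch(group_name.lower(), pattern.lower())

# Simulating e.g. ?wanted_groups=*Sports*,*News* on the /m3u endpoint
groups = ["Sky Sports", "BeIN Sports", "US News", "Movies"]
wanted = ["*Sports*", "*News*"]
kept = [g for g in groups if any(group_matches(g, p) for p in wanted)]
print(kept)  # ['Sky Sports', 'BeIN Sports', 'US News']
```

As documented, `wanted_groups` takes precedence over `unwanted_groups` when both are supplied.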
diff --git a/app/__init__.py b/app/__init__.py new file mode 100644 index 0000000..1df9591 --- /dev/null +++ b/app/__init__.py @@ -0,0 +1,32 @@ +"""Flask application factory and configuration""" +import logging +import os + +from flask import Flask + +# Configure logging +logging.basicConfig(level=logging.INFO) +logger = logging.getLogger(__name__) + + +def create_app(): + """Create and configure the Flask application""" + app = Flask(__name__, + static_folder='../frontend', + template_folder='../frontend') + + # Get default proxy URL from environment variable + app.config['DEFAULT_PROXY_URL'] = os.environ.get("PROXY_URL") + + # Register blueprints + from app.routes.api import api_bp + from app.routes.proxy import proxy_bp + from app.routes.static import static_bp + + app.register_blueprint(static_bp) + app.register_blueprint(proxy_bp) + app.register_blueprint(api_bp) + + logger.info("Flask application created and configured") + + return app diff --git a/app/routes/__init__.py b/app/routes/__init__.py new file mode 100644 index 0000000..a9cfe05 --- /dev/null +++ b/app/routes/__init__.py @@ -0,0 +1,6 @@ +"""Routes package - Register blueprints here""" +from .api import api_bp +from .proxy import proxy_bp +from .static import static_bp + +__all__ = ['static_bp', 'proxy_bp', 'api_bp'] diff --git a/app/routes/api.py b/app/routes/api.py new file mode 100644 index 0000000..ea1e1b5 --- /dev/null +++ b/app/routes/api.py @@ -0,0 +1,208 @@ +"""API routes for Xtream Codes proxy (categories, M3U, XMLTV)""" +import json +import logging +import os +import re + +from flask import Blueprint, Response, current_app, jsonify, request + +from app.services import ( + fetch_api_data, + fetch_categories_and_channels, + generate_m3u_playlist, + validate_xtream_credentials, +) +from app.utils import encode_url, parse_group_list + +logger = logging.getLogger(__name__) + +api_bp = Blueprint('api', __name__) + + +def get_required_params(): + """Get and validate the required parameters from 
the request (supports both GET and POST)""" + # Handle both GET and POST requests + if request.method == "POST": + data = request.get_json() or {} + url = data.get("url") + username = data.get("username") + password = data.get("password") + proxy_url = data.get("proxy_url", current_app.config['DEFAULT_PROXY_URL']) or request.host_url.rstrip("/") + else: + url = request.args.get("url") + username = request.args.get("username") + password = request.args.get("password") + proxy_url = request.args.get("proxy_url", current_app.config['DEFAULT_PROXY_URL']) or request.host_url.rstrip("/") + + if not url or not username or not password: + return ( + None, + None, + None, + None, + jsonify({"error": "Missing Parameters", "details": "Required parameters: url, username, and password"}), + 400 + ) + + return url, username, password, proxy_url, None, None + + +@api_bp.route("/categories", methods=["GET"]) +def get_categories(): + """Get all available categories from the Xtream API""" + # Get and validate parameters + url, username, password, proxy_url, error, status_code = get_required_params() + if error: + return error, status_code + + # Check for VOD parameter - default to false to avoid timeouts (VOD is massive and slow!) 
+ include_vod = request.args.get("include_vod", "false").lower() == "true" + logger.info(f"VOD content requested: {include_vod}") + + # Validate credentials + user_data, error_json, error_code = validate_xtream_credentials(url, username, password) + if error_json: + return error_json, error_code, {"Content-Type": "application/json"} + + # Fetch categories + categories, channels, error_json, error_code = fetch_categories_and_channels(url, username, password, include_vod) + if error_json: + return error_json, error_code, {"Content-Type": "application/json"} + + # Return categories as JSON + return json.dumps(categories), 200, {"Content-Type": "application/json"} + + +@api_bp.route("/xmltv", methods=["GET"]) +def generate_xmltv(): + """Generate a filtered XMLTV file from the Xtream API""" + # Get and validate parameters + url, username, password, proxy_url, error, status_code = get_required_params() + if error: + return error, status_code + + # No filtering supported for XMLTV endpoint + + # Validate credentials + user_data, error_json, error_code = validate_xtream_credentials(url, username, password) + if error_json: + return error_json, error_code, {"Content-Type": "application/json"} + + # Fetch XMLTV data + base_url = url.rstrip("/") + xmltv_url = f"{base_url}/xmltv.php?username={username}&password={password}" + xmltv_data = fetch_api_data(xmltv_url, timeout=20) # Longer timeout for XMLTV + + if isinstance(xmltv_data, tuple): # Error response + return json.dumps(xmltv_data[0]), xmltv_data[1], {"Content-Type": "application/json"} + + # If not proxying, return the original XMLTV + if not proxy_url: + return Response( + xmltv_data, mimetype="application/xml", headers={"Content-Disposition": "attachment; filename=guide.xml"} + ) + + # Replace image URLs in the XMLTV content with proxy URLs + def replace_icon_url(match): + original_url = match.group(1) + proxied_url = f"{proxy_url}/image-proxy/{encode_url(original_url)}" + return f' 10 else str(wanted_groups) + 
unwanted_display = f"{len(unwanted_groups)} groups" if len(unwanted_groups) > 10 else str(unwanted_groups) + logger.info(f"Filter parameters - wanted_groups: {wanted_display}, unwanted_groups: {unwanted_display}, include_vod: {include_vod}") + + # Warn about massive filter lists + total_filters = len(wanted_groups) + len(unwanted_groups) + if total_filters > 20: + logger.warning(f"⚠️ Large filter list detected ({total_filters} categories) - this will be slower!") + if total_filters > 50: + logger.warning(f"🐌 MASSIVE filter list ({total_filters} categories) - expect 3-5 minute processing time!") + + # Validate credentials + user_data, error_json, error_code = validate_xtream_credentials(url, username, password) + if error_json: + return error_json, error_code, {"Content-Type": "application/json"} + + # Fetch categories and channels + categories, streams, error_json, error_code = fetch_categories_and_channels(url, username, password, include_vod) + if error_json: + return error_json, error_code, {"Content-Type": "application/json"} + + # Extract user info and server URL + username = user_data["user_info"]["username"] + password = user_data["user_info"]["password"] + + server_url = f"http://{user_data['server_info']['url']}:{user_data['server_info']['port']}" + + # Generate M3U playlist + m3u_playlist = generate_m3u_playlist( + url=url, + username=username, + password=password, + server_url=server_url, + categories=categories, + streams=streams, + wanted_groups=wanted_groups, + unwanted_groups=unwanted_groups, + no_stream_proxy=no_stream_proxy, + include_vod=include_vod, + include_channel_id=include_channel_id, + channel_id_tag=channel_id_tag, + proxy_url=proxy_url + ) + + # Determine filename based on content included + filename = "FullPlaylist.m3u" if include_vod else "LiveStream.m3u" + + # Return the M3U playlist with proper CORS headers for frontend + headers = { + "Content-Disposition": f"attachment; filename={filename}", + "Access-Control-Allow-Origin": "*", +
"Access-Control-Allow-Methods": "GET, POST, OPTIONS", + "Access-Control-Allow-Headers": "Content-Type" + } + + return Response(m3u_playlist, mimetype="audio/x-mpegurl", headers=headers) diff --git a/app/routes/proxy.py b/app/routes/proxy.py new file mode 100644 index 0000000..55409d9 --- /dev/null +++ b/app/routes/proxy.py @@ -0,0 +1,71 @@ +"""Proxy routes for images and streams""" +import logging +import urllib.parse + +import requests +from flask import Blueprint, Response + +from app.utils.streaming import generate_streaming_response, stream_request + +logger = logging.getLogger(__name__) + +proxy_bp = Blueprint('proxy', __name__) + + +@proxy_bp.route("/image-proxy/<path:image_url>") +def proxy_image(image_url): + """Proxy endpoint for images to avoid CORS issues""" + try: + original_url = urllib.parse.unquote(image_url) + logger.info(f"Image proxy request for: {original_url}") + + response = requests.get(original_url, stream=True, timeout=10) + response.raise_for_status() + + content_type = response.headers.get("Content-Type", "") + + if not content_type.startswith("image/"): + logger.error(f"Invalid content type for image: {content_type}") + return Response("Invalid image type", status=415) + + return generate_streaming_response(response, content_type) + except requests.Timeout: + return Response("Image fetch timeout", status=504) + except requests.HTTPError as e: + return Response(f"Failed to fetch image: {str(e)}", status=e.response.status_code) + except Exception as e: + logger.error(f"Image proxy error: {str(e)}") + return Response("Failed to process image", status=500) + + +@proxy_bp.route("/stream-proxy/<path:stream_url>") +def proxy_stream(stream_url): + """Proxy endpoint for streams""" + try: + original_url = urllib.parse.unquote(stream_url) + logger.info(f"Stream proxy request for: {original_url}") + + response = stream_request(original_url, timeout=60) # Longer timeout for live streams + response.raise_for_status() + + # Determine content type + content_type =
response.headers.get("Content-Type") + if not content_type: + if original_url.endswith(".ts"): + content_type = "video/MP2T" + elif original_url.endswith(".m3u8"): + content_type = "application/vnd.apple.mpegurl" + else: + content_type = "application/octet-stream" + + logger.info(f"Using content type: {content_type}") + return generate_streaming_response(response, content_type) + except requests.Timeout: + logger.error(f"Timeout connecting to stream: {original_url}") + return Response("Stream timeout", status=504) + except requests.HTTPError as e: + logger.error(f"HTTP error fetching stream: {e.response.status_code} - {original_url}") + return Response(f"Failed to fetch stream: {str(e)}", status=e.response.status_code) + except Exception as e: + logger.error(f"Stream proxy error: {str(e)} - {original_url}") + return Response("Failed to process stream", status=500) diff --git a/app/routes/static.py b/app/routes/static.py new file mode 100644 index 0000000..80d55f8 --- /dev/null +++ b/app/routes/static.py @@ -0,0 +1,45 @@ +"""Static file and frontend routes""" +import logging +import os + +from flask import Blueprint, send_from_directory + +logger = logging.getLogger(__name__) + +static_bp = Blueprint('static', __name__) + +# Get the base directory (project root) +BASE_DIR = os.path.abspath(os.path.join(os.path.dirname(__file__), '..', '..')) +FRONTEND_DIR = os.path.join(BASE_DIR, 'frontend') +ASSETS_DIR = os.path.join(BASE_DIR, 'docs', 'assets') + + +@static_bp.route("/") +def serve_frontend(): + """Serve the frontend index.html file""" + return send_from_directory(FRONTEND_DIR, "index.html") + + +@static_bp.route("/assets/<path:filename>") +def serve_assets(filename): + """Serve assets from the docs/assets directory""" + try: + return send_from_directory(ASSETS_DIR, filename) + except Exception: + return "Asset not found", 404 + + +@static_bp.route("/<path:filename>")
def serve_static_files(filename): + """Serve static files from the frontend directory""" + # Don't serve API routes through static file handler + api_routes = ["m3u", "xmltv", "categories", "image-proxy", "stream-proxy", "assets"] + if filename.split("/")[0] in api_routes: + return "Not found", 404 + + # Only serve files that exist in the frontend directory + try: + return send_from_directory(FRONTEND_DIR, filename) + except Exception: + # If file doesn't exist in frontend, return 404 + return "File not found", 404 diff --git a/app/services/__init__.py b/app/services/__init__.py new file mode 100644 index 0000000..13ea94c --- /dev/null +++ b/app/services/__init__.py @@ -0,0 +1,16 @@ +"""Services package""" +from .m3u_generator import generate_m3u_playlist +from .xtream_api import ( + fetch_api_data, + fetch_categories_and_channels, + fetch_series_episodes, + validate_xtream_credentials, +) + +__all__ = [ + 'fetch_api_data', + 'validate_xtream_credentials', + 'fetch_categories_and_channels', + 'fetch_series_episodes', + 'generate_m3u_playlist' +] diff --git a/app/services/m3u_generator.py b/app/services/m3u_generator.py new file mode 100644 index 0000000..5d4c59c --- /dev/null +++ b/app/services/m3u_generator.py @@ -0,0 +1,250 @@ +"""M3U playlist generation service""" +import logging +from concurrent.futures import ThreadPoolExecutor, as_completed + +from app.services.xtream_api import fetch_series_episodes +from app.utils import encode_url, group_matches + +logger = logging.getLogger(__name__) + + +def generate_m3u_playlist( + url, + username, + password, + server_url, + categories, + streams, + wanted_groups=None, + unwanted_groups=None, + no_stream_proxy=False, + include_vod=False, + include_channel_id=False, + channel_id_tag="channel-id", + proxy_url=None +): + """ + Generate an M3U playlist from Xtream API data + + Args: + url: Xtream API base URL + username: Xtream API username + password: Xtream API password + server_url: Server URL for streaming + categories: List of categories + streams: List of streams + wanted_groups: List of group patterns to include (optional) + unwanted_groups: List of group
patterns to exclude (optional) + no_stream_proxy: Whether to disable stream proxying + include_vod: Whether VOD content is included + include_channel_id: Whether to include channel IDs + channel_id_tag: Tag name for channel IDs + proxy_url: Proxy URL for images and streams + + Returns: + M3U playlist string + """ + # Create category name lookup + category_names = {cat["category_id"]: cat["category_name"] for cat in categories} + + # Log all available groups + all_groups = set(category_names.values()) + logger.info(f"All available groups: {sorted(all_groups)}") + + # Generate M3U playlist + m3u_playlist = "#EXTM3U\n" + + # Track included groups + included_groups = set() + processed_streams = 0 + total_streams = len(streams) + + # Pre-compile filter patterns for massive filter lists (performance optimization) + wanted_patterns = [pattern.lower() for pattern in wanted_groups] if wanted_groups else [] + unwanted_patterns = [pattern.lower() for pattern in unwanted_groups] if unwanted_groups else [] + + logger.info(f"🔍 Starting to filter {total_streams} streams...") + batch_size = 10000 # Process streams in batches for better performance + + # Filter series to fetch episodes for (optimization to avoid fetching episodes for excluded series) + series_episodes_map = {} + if include_vod: + series_streams = [s for s in streams if s.get("content_type") == "series"] + if series_streams: + logger.info(f"Found {len(series_streams)} series. 
Filtering to determine which need episodes...") + + series_to_fetch = [] + for stream in series_streams: + # Get raw category name for filtering + category_name = category_names.get(stream.get('category_id'), 'Uncategorized') + + # Calculate group_title (prefixed) + group_title = f"Series - {category_name}" + + # Check filter against both raw category name and prefixed name + # This ensures we match "Action" (raw) AND "Series - Action" (prefixed) + should_fetch = True + if wanted_patterns: + should_fetch = any( + group_matches(category_name, w) or group_matches(group_title, w) + for w in wanted_groups + ) + elif unwanted_patterns: + should_fetch = not any( + group_matches(category_name, u) or group_matches(group_title, u) + for u in unwanted_groups + ) + + if should_fetch: + series_to_fetch.append(stream) + + if series_to_fetch: + logger.info(f"Fetching episodes for {len(series_to_fetch)} series (this might take a while)...") + + with ThreadPoolExecutor(max_workers=5) as executor: + future_to_series = { + executor.submit(fetch_series_episodes, url, username, password, s.get("series_id")): s.get("series_id") + for s in series_to_fetch + } + + completed_fetches = 0 + for future in as_completed(future_to_series): + s_id, episodes = future.result() + if episodes: + series_episodes_map[s_id] = episodes + + completed_fetches += 1 + if completed_fetches % 50 == 0: + logger.info(f" Fetched episodes for {completed_fetches}/{len(series_to_fetch)} series") + + logger.info(f"✅ Fetched episodes for {len(series_episodes_map)} series") + + for stream in streams: + content_type = stream.get("content_type", "live") + + # Get raw category name + category_name = category_names.get(stream.get("category_id"), "Uncategorized") + + # Determine group title based on content type + if content_type == "series": + # For series, use series name as group title + group_title = f"Series - {category_name}" + stream_name = stream.get("name", "Unknown Series") + else: + # For live and VOD content + 
group_title = category_name + stream_name = stream.get("name", "Unknown") + + # Add content type prefix for VOD + if content_type == "vod": + group_title = f"VOD - {category_name}" + + # Optimized filtering logic using pre-compiled patterns + include_stream = True + + if wanted_patterns: + # Only include streams from specified groups (optimized matching) + # Check both raw category name and final group title to support flexible filtering + include_stream = any( + group_matches(category_name, wanted_group) or group_matches(group_title, wanted_group) + for wanted_group in wanted_groups + ) + elif unwanted_patterns: + # Exclude streams from unwanted groups (optimized matching) + include_stream = not any( + group_matches(category_name, unwanted_group) or group_matches(group_title, unwanted_group) + for unwanted_group in unwanted_groups + ) + + processed_streams += 1 + + # Progress logging for large datasets + if processed_streams % batch_size == 0: + logger.info(f" 📊 Processed {processed_streams}/{total_streams} streams ({(processed_streams/total_streams)*100:.1f}%)") + + if include_stream: + included_groups.add(group_title) + + tags = [ + f'tvg-name="{stream_name}"', + f'group-title="{group_title}"', + ] + + # Handle logo URL - proxy only if stream proxying is enabled + original_logo = stream.get("stream_icon", "") + if original_logo and not no_stream_proxy: + logo_url = f"{proxy_url}/image-proxy/{encode_url(original_logo)}" + else: + logo_url = original_logo + tags.append(f'tvg-logo="{logo_url}"') + + # Handle channel id if enabled + if include_channel_id: + channel_id = stream.get("epg_channel_id") + if channel_id: + tags.append(f'{channel_id_tag}="{channel_id}"') + + # Create the stream URL based on content type + if content_type == "live": + # Live TV streams + stream_url = f"{server_url}/live/{username}/{password}/{stream['stream_id']}.ts" + elif content_type == "vod": + # VOD streams + stream_url = 
f"{server_url}/movie/{username}/{password}/{stream['stream_id']}.{stream.get('container_extension', 'mp4')}" + elif content_type == "series": + # Series streams - check if we have episodes + episodes_data = series_episodes_map.get(stream.get("series_id")) + + if episodes_data: + # Sort seasons numerically if possible + try: + sorted_seasons = sorted(episodes_data.items(), key=lambda x: int(x[0]) if str(x[0]).isdigit() else 999) + except (ValueError, TypeError): + sorted_seasons = episodes_data.items() + + for season_num, episodes in sorted_seasons: + for episode in episodes: + episode_id = episode.get("id") + episode_num = episode.get("episode_num") + episode_title = episode.get("title") + container_ext = episode.get("container_extension", "mp4") + + # Format title: Series Name - S01E01 - Episode Title + full_title = f"{stream_name} - S{str(season_num).zfill(2)}E{str(episode_num).zfill(2)} - {episode_title}" + + # Build stream URL for episode + ep_stream_url = f"{server_url}/series/{username}/{password}/{episode_id}.{container_ext}" + + # Apply stream proxying if enabled + if not no_stream_proxy: + ep_stream_url = f"{proxy_url}/stream-proxy/{encode_url(ep_stream_url)}" + + # Add to playlist + m3u_playlist += ( + f'#EXTINF:0 {" ".join(tags)},{full_title}\n' + ) + m3u_playlist += f"{ep_stream_url}\n" + + # Continue to next stream as we've added all episodes + continue + else: + # Fallback for series without episode data + series_id = stream.get("series_id", stream.get("stream_id", "")) + stream_url = f"{server_url}/series/{username}/{password}/{series_id}.mp4" + + # Apply stream proxying if enabled (for non-series, or series fallback) + if not no_stream_proxy: + stream_url = f"{proxy_url}/stream-proxy/{encode_url(stream_url)}" + + # Add stream to playlist + m3u_playlist += ( + f'#EXTINF:0 {" ".join(tags)},{stream_name}\n' + ) + m3u_playlist += f"{stream_url}\n" + + # Log included groups after filtering + logger.info(f"Groups included after filtering: {sorted(included_groups)}") +
logger.info(f"Groups excluded after filtering: {sorted(all_groups - included_groups)}") + logger.info(f"✅ M3U generation complete! Generated playlist with {len(included_groups)} groups") + + return m3u_playlist diff --git a/app/services/xtream_api.py b/app/services/xtream_api.py new file mode 100644 index 0000000..a4abf11 --- /dev/null +++ b/app/services/xtream_api.py @@ -0,0 +1,281 @@ +"""Xtream Codes API client service""" +import json +import logging +import time +import urllib.parse +from concurrent.futures import ThreadPoolExecutor, as_completed + +import requests +from fake_useragent import UserAgent +from flask import request + +logger = logging.getLogger(__name__) + + +def fetch_api_data(url, timeout=10): + """Make a request to an API endpoint""" + ua = UserAgent() + headers = { + "User-Agent": ua.chrome, + "Accept": "application/json,text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8", + "Accept-Language": "en-US,en;q=0.5", + "Connection": "close", + "Accept-Encoding": "gzip, deflate", + } + + try: + hostname = urllib.parse.urlparse(url).netloc.split(":")[0] + logger.debug(f"Making request to host: {hostname}") + + # Use fresh connection for each request to avoid stale connection issues + response = requests.get(url, headers=headers, timeout=timeout, stream=True) + response.raise_for_status() + + # For large responses, use streaming JSON parsing + try: + # Check content length to decide parsing strategy + content_length = response.headers.get('Content-Length') + if content_length and int(content_length) > 10_000_000: # > 10MB + logger.info(f"Large response detected ({content_length} bytes), using optimized parsing") + + # Stream the JSON content for better memory efficiency + response.encoding = 'utf-8' # Ensure proper encoding + return response.json() + except json.JSONDecodeError: + # Fallback to text for non-JSON responses + return response.text + + except requests.exceptions.SSLError: + return {"error": "SSL Error", "details": 
"Failed to verify SSL certificate"}, 503 + except requests.exceptions.RequestException as e: + logger.error(f"RequestException: {e}") + return {"error": "Request Exception", "details": str(e)}, 503 + + +def validate_xtream_credentials(url, username, password): + """Validate the Xtream API credentials""" + api_url = f"{url}/player_api.php?username={username}&password={password}" + data = fetch_api_data(api_url) + + if isinstance(data, tuple): # Error response + return None, data[0], data[1] + + if "user_info" not in data or "server_info" not in data: + return ( + None, + json.dumps( + { + "error": "Invalid Response", + "details": "Server response missing required data (user_info or server_info)", + } + ), + 400, + ) + + return data, None, None + + +def fetch_api_endpoint(url_info): + """Fetch a single API endpoint - used for concurrent requests""" + url, name, timeout = url_info + try: + logger.info(f"🚀 Fetching {name}...") + start_time = time.time() + data = fetch_api_data(url, timeout=timeout) + end_time = time.time() + + if isinstance(data, list): + logger.info(f"✅ Completed {name} in {end_time-start_time:.1f}s - got {len(data)} items") + else: + logger.info(f"✅ Completed {name} in {end_time-start_time:.1f}s") + return name, data + except Exception as e: + logger.warning(f"❌ Failed to fetch {name}: {e}") + return name, None + + +def fetch_series_episodes(url, username, password, series_id): + """Fetch episodes for a specific series""" + api_url = f"{url}/player_api.php?username={username}&password={password}&action=get_series_info&series_id={series_id}" + start_time = time.time() + try: + # Use a shorter timeout for individual series as we might fetch many + data = fetch_api_data(api_url, timeout=20) + + # Check if we got a valid response with episodes + # The API returns 'episodes' as a dict {season_num: [episodes]} + if isinstance(data, dict) and "episodes" in data and data["episodes"]: + logger.info(f"✅ Fetched episodes for series {series_id} in {time.time() - 
start_time:.1f}s") + return series_id, data["episodes"] + else: + logger.error(f"No episodes found for series {series_id}") + return series_id, None + except Exception as e: + logger.error(f"Failed to fetch episodes for series {series_id} in {time.time() - start_time:.1f}s: {e}") + return series_id, None + + +def fetch_categories_and_channels(url, username, password, include_vod=False): + """Fetch categories and channels from the Xtream API using concurrent requests""" + all_categories = [] + all_streams = [] + + try: + # Prepare all API endpoints to fetch concurrently + api_endpoints = [ + (f"{url}/player_api.php?username={username}&password={password}&action=get_live_categories", + "live_categories", 60), + (f"{url}/player_api.php?username={username}&password={password}&action=get_live_streams", + "live_streams", 180), + ] + + # Add VOD endpoints if requested (WARNING: This will be much slower!) + if include_vod: + logger.warning("⚠️ Including VOD content - this will take significantly longer!") + logger.info("💡 For faster loading, use the API without include_vod=true") + + # Only add the most essential VOD endpoints - skip the massive streams for categories-only requests + api_endpoints.extend([ + (f"{url}/player_api.php?username={username}&password={password}&action=get_vod_categories", + "vod_categories", 60), + (f"{url}/player_api.php?username={username}&password={password}&action=get_series_categories", + "series_categories", 60), + ]) + + # Only fetch the massive stream lists if explicitly needed for M3U generation + vod_for_m3u = request.endpoint == 'api.generate_m3u' + if vod_for_m3u: + logger.warning("🐌 Fetching massive VOD/Series streams for M3U generation...") + api_endpoints.extend([ + (f"{url}/player_api.php?username={username}&password={password}&action=get_vod_streams", + "vod_streams", 240), + (f"{url}/player_api.php?username={username}&password={password}&action=get_series", + "series", 240), + ]) + else: + logger.info("⚡ Skipping massive VOD 
streams for categories-only request") + + # Fetch all endpoints concurrently using ThreadPoolExecutor + logger.info(f"Starting concurrent fetch of {len(api_endpoints)} API endpoints...") + results = {} + + with ThreadPoolExecutor(max_workers=10) as executor: # Increased workers for better concurrency + # Submit all API calls + future_to_name = {executor.submit(fetch_api_endpoint, endpoint): endpoint[1] + for endpoint in api_endpoints} + + # Collect results as they complete + for future in as_completed(future_to_name): + name, data = future.result() + results[name] = data + + logger.info("All concurrent API calls completed!") + + # Process live categories and streams (required) + live_categories = results.get("live_categories") + live_streams = results.get("live_streams") + + if isinstance(live_categories, tuple): # Error response + return None, None, live_categories[0], live_categories[1] + if isinstance(live_streams, tuple): # Error response + return None, None, live_streams[0], live_streams[1] + + if not isinstance(live_categories, list) or not isinstance(live_streams, list): + return ( + None, + None, + json.dumps( + { + "error": "Invalid Data Format", + "details": "Live categories or streams data is not in the expected format", + } + ), + 500, + ) + + # Optimized data processing - batch operations for massive datasets + logger.info("Processing live content...") + + # Batch set content_type for live content + if live_categories: + for category in live_categories: + category["content_type"] = "live" + all_categories.extend(live_categories) + + if live_streams: + for stream in live_streams: + stream["content_type"] = "live" + all_streams.extend(live_streams) + + logger.info(f"✅ Added {len(live_categories)} live categories and {len(live_streams)} live streams") + + # Process VOD content if requested and available + if include_vod: + logger.info("Processing VOD content...") + + # Process VOD categories + vod_categories = results.get("vod_categories") + if 
isinstance(vod_categories, list) and vod_categories: + for category in vod_categories: + category["content_type"] = "vod" + all_categories.extend(vod_categories) + logger.info(f"✅ Added {len(vod_categories)} VOD categories") + + # Process series categories first (lightweight) + series_categories = results.get("series_categories") + if isinstance(series_categories, list) and series_categories: + for category in series_categories: + category["content_type"] = "series" + all_categories.extend(series_categories) + logger.info(f"✅ Added {len(series_categories)} series categories") + + # Only process massive stream lists if they were actually fetched + vod_streams = results.get("vod_streams") + if isinstance(vod_streams, list) and vod_streams: + logger.info(f"🔥 Processing {len(vod_streams)} VOD streams (this is the slow part)...") + + # Batch process for better performance + batch_size = 5000 + for i in range(0, len(vod_streams), batch_size): + batch = vod_streams[i:i + batch_size] + for stream in batch: + stream["content_type"] = "vod" + if i + batch_size < len(vod_streams): + logger.info(f" Processed {i + batch_size}/{len(vod_streams)} VOD streams...") + + all_streams.extend(vod_streams) + logger.info(f"✅ Added {len(vod_streams)} VOD streams") + + # Process series (this can also be huge!) 
+            series = results.get("series")
+            if isinstance(series, list) and series:
+                logger.info(f"🔥 Processing {len(series)} series (this is also slow)...")
+
+                # Batch process for better performance
+                batch_size = 5000
+                for i in range(0, len(series), batch_size):
+                    batch = series[i:i + batch_size]
+                    for show in batch:
+                        show["content_type"] = "series"
+                    if i + batch_size < len(series):
+                        logger.info(f"  Processed {i + batch_size}/{len(series)} series...")
+
+                all_streams.extend(series)
+                logger.info(f"✅ Added {len(series)} series")
+
+    except Exception as e:
+        logger.error(f"Critical error fetching API data: {e}")
+        return (
+            None,
+            None,
+            json.dumps(
+                {
+                    "error": "API Fetch Error",
+                    "details": f"Failed to fetch data from IPTV service: {str(e)}",
+                }
+            ),
+            500,
+        )
+
+    logger.info(f"🚀 CONCURRENT FETCH COMPLETE: {len(all_categories)} total categories and {len(all_streams)} total streams")
+    return all_categories, all_streams, None, None
diff --git a/app/utils/__init__.py b/app/utils/__init__.py
new file mode 100644
index 0000000..09d47b4
--- /dev/null
+++ b/app/utils/__init__.py
@@ -0,0 +1,12 @@
+"""Utility functions package"""
+from .helpers import encode_url, group_matches, parse_group_list, setup_custom_dns
+from .streaming import generate_streaming_response, stream_request
+
+__all__ = [
+    'setup_custom_dns',
+    'encode_url',
+    'parse_group_list',
+    'group_matches',
+    'stream_request',
+    'generate_streaming_response'
+]
diff --git a/app/utils/helpers.py b/app/utils/helpers.py
new file mode 100644
index 0000000..f12de3e
--- /dev/null
+++ b/app/utils/helpers.py
@@ -0,0 +1,93 @@
+"""Utility functions for URL encoding, filtering, and DNS setup"""
+import fnmatch
+import ipaddress
+import logging
+import socket
+import urllib.parse
+
+import dns.resolver
+
+logger = logging.getLogger(__name__)
+
+
+def setup_custom_dns():
+    """Configure a custom DNS resolver using reliable DNS services"""
+    dns_servers = ["1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4", "9.9.9.9"]
+
+    custom_resolver = dns.resolver.Resolver()
+    custom_resolver.nameservers = dns_servers
+
+    original_getaddrinfo = socket.getaddrinfo
+
+    def new_getaddrinfo(host, port, family=0, type=0, proto=0, flags=0):
+        if host:
+            try:
+                # Skip DNS resolution for IP addresses
+                try:
+                    ipaddress.ip_address(host)
+                    # If we get here, the host is already an IP address
+                    logger.debug(f"Host is already an IP address: {host}, skipping DNS resolution")
+                except ValueError:
+                    # Not an IP address, so try system DNS first
+                    try:
+                        result = original_getaddrinfo(host, port, family, type, proto, flags)
+                        logger.debug(f"System DNS resolved {host}")
+                        return result
+                    except Exception as system_error:
+                        logger.info(f"System DNS resolution failed for {host}: {system_error}, falling back to custom DNS")
+                        # Fall back to custom DNS
+                        answers = custom_resolver.resolve(host)
+                        host = str(answers[0])
+                        logger.debug(f"Custom DNS resolved {host}")
+            except Exception as e:
+                logger.info(f"Custom DNS resolution also failed for {host}: {e}, using original getaddrinfo")
+        return original_getaddrinfo(host, port, family, type, proto, flags)
+
+    socket.getaddrinfo = new_getaddrinfo
+    logger.info("Custom DNS resolver set up")
+
+
+def encode_url(url):
+    """Safely encode a URL for use in proxy endpoints"""
+    return urllib.parse.quote(url, safe="") if url else ""
+
+
+def parse_group_list(group_string):
+    """Parse a comma-separated string into a list of trimmed strings"""
+    return [group.strip() for group in group_string.split(",")] if group_string else []
+
+
+def group_matches(group_title, pattern):
+    """Check if a group title matches a pattern, supporting wildcards and exact matching"""
+    # Convert to lowercase for case-insensitive matching
+    group_lower = group_title.lower()
+    pattern_lower = pattern.lower()
+
+    # Handle spaces in pattern
+    if " " in pattern_lower:
+        # For patterns with spaces, split and check each part
+        pattern_parts = pattern_lower.split()
+        group_parts = group_lower.split()
+
+        # If the pattern has more parts than the group, it can't match
+        if len(pattern_parts) > len(group_parts):
+            return False
+
+        # Check each part of the pattern against group parts
+        for i, part in enumerate(pattern_parts):
+            if i >= len(group_parts):
+                return False
+            if "*" in part or "?" in part:
+                if not fnmatch.fnmatch(group_parts[i], part):
+                    return False
+            else:
+                if part not in group_parts[i]:
+                    return False
+        return True
+
+    # Check for wildcard patterns
+    if "*" in pattern_lower or "?" in pattern_lower:
+        return fnmatch.fnmatch(group_lower, pattern_lower)
+    else:
+        # Simple substring match for non-wildcard patterns
+        return pattern_lower in group_lower
diff --git a/app/utils/streaming.py b/app/utils/streaming.py
new file mode 100644
index 0000000..ae971c5
--- /dev/null
+++ b/app/utils/streaming.py
@@ -0,0 +1,65 @@
+"""Streaming and proxy utilities"""
+import logging
+
+import requests
+from flask import Response
+
+logger = logging.getLogger(__name__)
+
+
+def stream_request(url, headers=None, timeout=30):
+    """Make a streaming request that doesn't buffer the full response"""
+    if not headers:
+        headers = {
+            "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36",
+            "Connection": "keep-alive",
+        }
+
+    # Use a longer timeout for streams and set both connect and read timeouts
+    return requests.get(url, stream=True, headers=headers, timeout=(10, timeout))
+
+
+def generate_streaming_response(response, content_type=None):
+    """Generate a streaming response with appropriate headers"""
+    if not content_type:
+        content_type = response.headers.get("Content-Type", "application/octet-stream")
+
+    def generate():
+        bytes_sent = 0
+        try:
+            for chunk in response.iter_content(chunk_size=8192):
+                if chunk:
+                    bytes_sent += len(chunk)
+                    yield chunk
+            logger.info(f"Stream completed, sent {bytes_sent} bytes")
+        except requests.exceptions.ChunkedEncodingError as e:
+            # Chunked encoding error from upstream - log and stop gracefully
+            logger.warning(f"Upstream chunked encoding error after {bytes_sent} bytes: {str(e)}")
+            # Don't raise - just stop yielding so the stream closes gracefully
+        except requests.exceptions.ConnectionError as e:
+            # Connection error (reset, timeout, etc.) - log and stop gracefully
+            logger.warning(f"Connection error after {bytes_sent} bytes: {str(e)}")
+        except Exception as e:
+            logger.error(f"Streaming error after {bytes_sent} bytes: {str(e)}")
+            # Don't raise exceptions in generators after headers are sent!
+            # Raising here causes Flask to inject "HTTP/1.1 500" into the chunked body.
+        finally:
+            # Always close the upstream response to free resources
+            try:
+                response.close()
+            except Exception:
+                pass
+
+    headers = {
+        "Access-Control-Allow-Origin": "*",
+        "Content-Type": content_type,
+    }
+
+    # Add content length if available and not using chunked transfer
+    if "Content-Length" in response.headers and "Transfer-Encoding" not in response.headers:
+        headers["Content-Length"] = response.headers["Content-Length"]
+    else:
+        headers["Transfer-Encoding"] = "chunked"
+
+    return Response(generate(), mimetype=content_type, headers=headers, direct_passthrough=True)
diff --git a/frontend/index.html b/frontend/index.html
index a64fc1c..57e0fe6 100644
--- a/frontend/index.html
+++ b/frontend/index.html
@@ -4,168 +4,163 @@
- xtream2m3u - M3U Playlist Generator
+ xtream2m3u - Playlist Generator
+
+
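The filtering helpers introduced in `app/utils/helpers.py` above are easiest to understand by example. The sketch below condenses `group_matches` and `encode_url` into a standalone snippet (the redundant bounds check is dropped, since the length comparison already guards it) to show the matching semantics: case-insensitive substring match by default, `fnmatch`-style wildcards, and word-by-word comparison when the pattern contains spaces.

```python
import fnmatch
import urllib.parse


def encode_url(url):
    """Percent-encode a URL so it fits in a single /stream-proxy/<encoded> path segment."""
    return urllib.parse.quote(url, safe="") if url else ""


def group_matches(group_title, pattern):
    """Case-insensitive group matching: substring by default, fnmatch wildcards,
    and word-by-word comparison for multi-word patterns."""
    group_lower = group_title.lower()
    pattern_lower = pattern.lower()

    if " " in pattern_lower:
        pattern_parts = pattern_lower.split()
        group_parts = group_lower.split()
        # A pattern with more words than the title can never match
        if len(pattern_parts) > len(group_parts):
            return False
        for i, part in enumerate(pattern_parts):
            if "*" in part or "?" in part:
                if not fnmatch.fnmatch(group_parts[i], part):
                    return False
            elif part not in group_parts[i]:
                return False
        return True

    if "*" in pattern_lower or "?" in pattern_lower:
        return fnmatch.fnmatch(group_lower, pattern_lower)
    return pattern_lower in group_lower


# Substring match is case-insensitive
assert group_matches("UK | Sports HD", "sports")
# Wildcards apply to the whole title
assert group_matches("US News", "us*")
assert not group_matches("US News", "sports*")
# Multi-word patterns compare word by word, in order
assert group_matches("UK Sports HD", "uk sports")
assert not group_matches("Sports", "uk sports")
# encode_url escapes every reserved character, so '/' cannot split the path
assert encode_url("http://host/live/u/p/1.ts") == "http%3A%2F%2Fhost%2Flive%2Fu%2Fp%2F1.ts"
```

Note that multi-word patterns are positional: `"uk sports"` matches `"UK Sports HD"` but not `"Sports UK"`, which is worth keeping in mind when choosing `wanted_groups` values.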

-
+ +

xtream2m3u

-

Convert Xtream IPTV APIs into customizable M3U playlists

-
+

Generate custom M3U playlists from your Xtream IPTV subscription.

+ -
+
- 🔐 Xtream API Credentials + 🔐 Service Credentials
-
-
🔒
-
- Privacy Notice: Your credentials are only used to connect to your IPTV - service and are never saved or stored on our servers. +
+
+ +
-
-
- - -
- -
- - -
- -
- - -
- -
-
- +
+ +
-
- +
+ + +
+ +
+
+ +
+
+ +
+ +
+
-
+
-

Loading categories...

+

Connecting to service...

-
+
- 📁 Select Categories + 📁 Customize Playlist
-
- - +
+
+ + +
+
- Click categories to select them (or leave empty to include all) + Select categories to include in your playlist +
+ + +
- +
- - +
-
+
- -
+ +
-

Playlist Generated!

-

Your M3U playlist has been successfully created and is ready for - download.

+

Playlist Ready!

+

Your custom M3U playlist has been generated successfully.

+
- + 📥 Download .m3u -
-
+ -
+
- \ No newline at end of file + diff --git a/frontend/script.js b/frontend/script.js index 45249f0..7c71f0b 100644 --- a/frontend/script.js +++ b/frontend/script.js @@ -1,486 +1,483 @@ -let categories = []; -let currentStep = 1; - -async function loadCategories() { - const url = document.getElementById("url").value.trim(); - const username = document.getElementById("username").value.trim(); - const password = document.getElementById("password").value.trim(); - const includeVod = document.getElementById("includeVod").checked; - - if (!url || !username || !password) { - showError("Please fill in all required fields"); - return; - } - - const loadingElement = document.getElementById("loading"); - const loadButton = document.getElementById("loadCategoriesText"); - - loadButton.textContent = "Loading..."; - loadingElement.style.display = "block"; - hideAllSteps(); - clearResults(); - - try { - const params = new URLSearchParams({ - url: url, - username: username, - password: password, - }); - - if (includeVod) { - params.append("include_vod", "true"); +// State Management +let state = { + categories: [], + currentStep: 1, + filterMode: 'include', + selectedCategories: new Set(), + collapsedSections: new Set(), + searchTerm: '', + credentials: { + url: '', + username: '', + password: '', + includeVod: false } +}; - const response = await fetch(`/categories?${params}`); - const data = await response.json(); - - if (!response.ok) { - throw new Error( - data.details || data.error || "Failed to load categories" - ); - } - - categories = data; - displayCategoryChips(categories); - showStep(2); - } catch (error) { - console.error("Error loading categories:", error); - showError(`Failed to load categories: ${error.message}`); - showStep(1); - } finally { - loadingElement.style.display = "none"; - loadButton.textContent = "Continue to Category Selection"; - } -} - -function displayCategoryChips(categories) { - const categoryChips = document.getElementById("categoryChips"); - 
categoryChips.innerHTML = ""; - - // Group categories by content type - const groupedCategories = { - live: [], - vod: [], - series: [], - }; - - categories.forEach((category) => { - const contentType = category.content_type || "live"; - if (groupedCategories[contentType]) { - groupedCategories[contentType].push(category); - } - }); - - // Define section headers and order - const sections = [ - { key: "live", title: "📺 Live TV", icon: "📺" }, - { key: "vod", title: "🎬 Movies & VOD", icon: "🎬" }, - { key: "series", title: "📺 TV Shows & Series", icon: "📺" }, - ]; - - sections.forEach((section) => { - const sectionCategories = groupedCategories[section.key]; - if (sectionCategories && sectionCategories.length > 0) { - // Create section header - const sectionHeader = document.createElement("div"); - sectionHeader.className = "category-section-header"; - sectionHeader.innerHTML = ` -

${section.title}

-
- - ${sectionCategories.length} categories -
- `; - categoryChips.appendChild(sectionHeader); - - // Create section container - const sectionContainer = document.createElement("div"); - sectionContainer.className = "category-section"; - - sectionCategories.forEach((category) => { - const chip = document.createElement("div"); - chip.className = "category-chip"; - chip.dataset.categoryId = category.category_id; - chip.dataset.categoryName = category.category_name; - chip.dataset.contentType = category.content_type || "live"; - chip.onclick = () => toggleChip(chip); - - chip.innerHTML = `${category.category_name}`; - sectionContainer.appendChild(chip); - }); - - categoryChips.appendChild(sectionContainer); - } - }); - - // Add event listeners for section select all buttons - document.querySelectorAll(".btn-section-select-all").forEach((button) => { - button.addEventListener("click", (e) => { - e.stopPropagation(); - const section = e.target.dataset.section; - const sectionChips = document.querySelectorAll( - `[data-content-type="${section}"]` - ); - const allSelected = Array.from(sectionChips).every((chip) => - chip.classList.contains("selected") - ); - - // Toggle all chips in this section - sectionChips.forEach((chip) => { - if (allSelected) { - chip.classList.remove("selected"); - } else { - chip.classList.add("selected"); - } - }); - - // Update button text - e.target.textContent = allSelected ? 
"Select All" : "Clear All"; - updateSelectionCounter(); - }); - }); - - updateSelectionCounter(); -} - -function toggleChip(chip) { - chip.classList.toggle("selected"); - updateSelectionCounter(); -} - -function updateSelectionCounter() { - const selectedChips = document.querySelectorAll(".category-chip.selected"); - const selectedCount = selectedChips.length; - const counter = document.getElementById("selectionCounter"); - const text = document.getElementById("selectionText"); - - if (selectedCount === 0) { - text.textContent = - "Click categories to select them (or leave empty to include all)"; - counter.classList.remove("has-selection"); - } else { - const filterMode = document.querySelector( - 'input[name="filterMode"]:checked' - ).value; - const action = filterMode === "include" ? "included" : "excluded"; - - // Count by content type - const contentTypeCounts = { live: 0, vod: 0, series: 0 }; - selectedChips.forEach((chip) => { - const contentType = chip.dataset.contentType || "live"; - if (contentTypeCounts.hasOwnProperty(contentType)) { - contentTypeCounts[contentType]++; - } - }); - - // Build detailed text with method info - const parts = []; - if (contentTypeCounts.live > 0) - parts.push(`${contentTypeCounts.live} Live TV`); - if (contentTypeCounts.vod > 0) - parts.push(`${contentTypeCounts.vod} Movies/VOD`); - if (contentTypeCounts.series > 0) - parts.push(`${contentTypeCounts.series} TV Shows`); - - const breakdown = parts.length > 0 ? ` (${parts.join(", ")})` : ""; - const methodInfo = selectedCount > 10 ? " • Using POST method for large request" : ""; - const timeEstimate = selectedCount > 20 ? " • Est. 2-4 min" : selectedCount > 10 ? " • Est. 
1-2 min" : ""; - - text.textContent = `${selectedCount} categories will be ${action}${breakdown}${methodInfo}${timeEstimate}`; - counter.classList.add("has-selection"); - } -} - -function showConfirmation() { - const selectedCategories = getSelectedCategories(); - const filterMode = document.querySelector( - 'input[name="filterMode"]:checked' - ).value; - const includeVod = document.getElementById("includeVod").checked; - const modal = document.getElementById("confirmationModal"); - const summary = document.getElementById("modalSummary"); - - const url = document.getElementById("url").value.trim(); - const username = document.getElementById("username").value.trim(); - - let categoryText; - if (selectedCategories.length === 0) { - categoryText = `All ${categories.length} categories`; - } else { - const action = filterMode === "include" ? "Include" : "Exclude"; - categoryText = `${action} ${selectedCategories.length} selected categories`; - } - - const contentType = includeVod - ? "Live TV + VOD/Movies/Shows" - : "Live TV only"; - - summary.innerHTML = ` -
- Service URL: - ${url} -
-
- Username: - ${username} -
-
- Content Type: - ${contentType} -
-
- Filter Mode: - ${categoryText} -
-
- Total Categories: - ${categories.length} -
- `; - - modal.classList.add("active"); -} - -function closeModal() { - document.getElementById("confirmationModal").classList.remove("active"); -} - -async function confirmGeneration() { - closeModal(); - - const url = document.getElementById("url").value.trim(); - const username = document.getElementById("username").value.trim(); - const password = document.getElementById("password").value.trim(); - const includeVod = document.getElementById("includeVod").checked; - const selectedCategories = getSelectedCategories(); - const filterMode = document.querySelector( - 'input[name="filterMode"]:checked' - ).value; - - hideAllSteps(); - document.getElementById("loading").style.display = "block"; - document.querySelector("#loading p").textContent = - "Generating your playlist..."; - - try { - // Build request data - const requestData = { - url: url, - username: username, - password: password, - nostreamproxy: "true", - }; - - if (includeVod) { - requestData.include_vod = "true"; - } - - if (selectedCategories.length > 0) { - if (filterMode === "include") { - requestData.wanted_groups = selectedCategories.join(","); - } else { - requestData.unwanted_groups = selectedCategories.join(","); - } - } - - // Use POST for large filter lists to avoid URL length limits - const shouldUsePost = selectedCategories.length > 10 || - JSON.stringify(requestData).length > 2000; - - console.log(`Using ${shouldUsePost ? 
'POST' : 'GET'} method for ${selectedCategories.length} categories`); - - let response; - if (shouldUsePost) { - // Show better progress message for large requests - document.querySelector("#loading p").textContent = - `Processing ${selectedCategories.length} categories - this may take 2-4 minutes...`; - - response = await fetch("/m3u", { - method: "POST", - headers: { - "Content-Type": "application/json", - }, - body: JSON.stringify(requestData) - }); - } else { - // Use GET for small requests - const params = new URLSearchParams(); - for (const [key, value] of Object.entries(requestData)) { - params.append(key, value); - } - response = await fetch(`/m3u?${params}`); - } - - if (!response.ok) { - const errorText = await response.text(); - throw new Error(errorText || "Failed to generate M3U playlist"); - } - - const blob = await response.blob(); - const downloadUrl = window.URL.createObjectURL(blob); - const downloadLink = document.getElementById("finalDownloadLink"); - downloadLink.href = downloadUrl; - downloadLink.download = "playlist.m3u"; - downloadLink.style.display = "inline-flex"; - - showStep(3); - } catch (error) { - console.error("Error generating M3U:", error); - showError(`Failed to generate M3U: ${error.message}`); - showStep(2); - } finally { - document.getElementById("loading").style.display = "none"; - document.querySelector("#loading p").textContent = "Loading categories..."; - } -} - -function getSelectedCategories() { - const selectedChips = document.querySelectorAll(".category-chip.selected"); - return Array.from(selectedChips).map((chip) => chip.dataset.categoryName); -} - -function clearSelection() { - const chips = document.querySelectorAll(".category-chip"); - chips.forEach((chip) => chip.classList.remove("selected")); - - // Reset section select all buttons - const selectAllButtons = document.querySelectorAll(".btn-section-select-all"); - selectAllButtons.forEach((button) => { - button.textContent = "Select All"; - }); - - 
updateSelectionCounter(); -} - -// Flow management functions -function hideAllSteps() { - document.querySelectorAll(".step").forEach((step) => { - step.classList.remove("active"); - }); -} +// DOM Elements +const elements = { + steps: { + 1: document.getElementById('step1'), + 2: document.getElementById('step2'), + 3: document.getElementById('step3') + }, + loading: document.getElementById('loading'), + loadingText: document.getElementById('loadingText'), + categoryChips: document.getElementById('categoryChips'), + selectionCounter: document.getElementById('selectionCounter'), + selectionText: document.getElementById('selectionText'), + confirmationModal: document.getElementById('confirmationModal'), + modalSummary: document.getElementById('modalSummary'), + results: document.getElementById('results'), + downloadLink: document.getElementById('finalDownloadLink'), + searchInput: document.getElementById('categorySearch') +}; +// Step Navigation function showStep(stepNumber) { - hideAllSteps(); - document.getElementById(`step${stepNumber}`).classList.add("active"); - currentStep = stepNumber; + // Hide all steps + Object.values(elements.steps).forEach(step => step.classList.remove('active')); + // Show target step + elements.steps[stepNumber].classList.add('active'); + state.currentStep = stepNumber; + window.scrollTo({ top: 0, behavior: 'smooth' }); } function goBackToStep1() { - showStep(1); + showStep(1); } -function startOver() { - // Clear all form data - document.getElementById("url").value = ""; - document.getElementById("username").value = ""; - document.getElementById("password").value = ""; - document.getElementById("includeVod").checked = false; - - // Reset categories and chips - categories = []; - document.getElementById("categoryChips").innerHTML = ""; - - // Clear any download link - const downloadLink = document.getElementById("finalDownloadLink"); - if (downloadLink.href && downloadLink.href.startsWith("blob:")) { - 
URL.revokeObjectURL(downloadLink.href); - } - downloadLink.style.display = "none"; - - clearResults(); - showStep(1); +function showLoading(message = 'Loading...') { + // Hide all steps + Object.values(elements.steps).forEach(step => step.classList.remove('active')); + elements.loading.style.display = 'block'; + elements.loadingText.textContent = message; } -function useOtherCredentials() { - // Keep categories but clear credentials - document.getElementById("url").value = ""; - document.getElementById("username").value = ""; - document.getElementById("password").value = ""; - - clearResults(); - showStep(1); +function hideLoading() { + elements.loading.style.display = 'none'; } function showError(message) { - const resultsDiv = document.getElementById("results"); - resultsDiv.innerHTML = `
⚠️ ${message}
`; + elements.results.innerHTML = ` +
+ ⚠️ ${message} +
+ `; + setTimeout(() => { + elements.results.innerHTML = ''; + }, 5000); } -function showSuccess(message) { - const resultsDiv = document.getElementById("results"); - resultsDiv.innerHTML = `
✅ ${message}
`; +// Data Fetching +async function loadCategories() { + const url = document.getElementById('url').value.trim(); + const username = document.getElementById('username').value.trim(); + const password = document.getElementById('password').value.trim(); + const includeVod = document.getElementById('includeVod').checked; + + if (!url || !username || !password) { + showError('Please fill in all required fields'); + return; + } + + // Update state + state.credentials = { url, username, password, includeVod }; + + showLoading('Connecting to IPTV service...'); + document.getElementById('loadBtn').disabled = true; + + try { + const params = new URLSearchParams({ + url, username, password, + include_vod: includeVod + }); + + const response = await fetch(`/categories?${params}`); + const data = await response.json(); + + if (!response.ok) { + throw new Error(data.details || data.error || 'Failed to fetch categories'); + } + + state.categories = data; + state.searchTerm = ''; + elements.searchInput.value = ''; + renderCategories(); + showStep(2); + + } catch (error) { + console.error('Error:', error); + showError(error.message); + showStep(1); + } finally { + hideLoading(); + document.getElementById('loadBtn').disabled = false; + } } -function clearResults() { - document.getElementById("results").innerHTML = ""; -} +// Category Rendering +function renderCategories() { + elements.categoryChips.innerHTML = ''; + // Preserve selection if just re-rendering, but currently we usually re-fetch on Step 1 -> 2. + // If we want to support search without re-rendering everything, we can just hide elements. + // But initially, we render all. 
-// Trim input fields on blur to prevent extra spaces -function setupInputTrimming() { - const textInputs = document.querySelectorAll( - 'input[type="text"], input[type="url"], input[type="password"]' - ); - textInputs.forEach((input) => { - input.addEventListener("blur", function () { - this.value = this.value.trim(); + // Group categories + const groups = { + live: [], + vod: [], + series: [] + }; + + state.categories.forEach(cat => { + const type = cat.content_type || 'live'; + if (groups[type]) groups[type].push(cat); }); - }); + + const sectionConfig = [ + { key: 'live', title: '📺 Live Channels' }, + { key: 'vod', title: '🎬 Movies' }, + { key: 'series', title: '🍿 TV Series' } + ]; + + sectionConfig.forEach(section => { + const cats = groups[section.key]; + if (cats && cats.length > 0) { + // Wrapper + const wrapper = document.createElement('div'); + wrapper.className = 'category-group-wrapper'; + wrapper.dataset.section = section.key; + + // Header + const header = document.createElement('div'); + header.className = 'category-section-header'; + if (state.collapsedSections.has(section.key)) { + header.classList.add('collapsed'); + } + header.dataset.section = section.key; + + // Header content + header.innerHTML = ` +

+ + ${section.title} + (${cats.length}) +

+ + `; + + // Click handler for collapsing + header.onclick = (e) => { + // Prevent collapsing when clicking the select all button + if (e.target.classList.contains('btn-section-select-all')) return; + toggleSection(section.key, header); + }; + + wrapper.appendChild(header); + + // Grid + const grid = document.createElement('div'); + grid.className = 'category-section'; + grid.dataset.section = section.key; + if (state.collapsedSections.has(section.key)) { + grid.classList.add('hidden'); + } + + cats.forEach(cat => { + const chip = document.createElement('div'); + chip.className = 'category-chip'; + if (state.selectedCategories.has(cat.category_name)) { + chip.classList.add('selected'); + } + chip.dataset.id = cat.category_id; + chip.dataset.name = cat.category_name; + chip.dataset.type = section.key; + chip.title = cat.category_name; + chip.textContent = cat.category_name; + + chip.onclick = () => toggleCategory(chip); + grid.appendChild(chip); + }); + + wrapper.appendChild(grid); + elements.categoryChips.appendChild(wrapper); + } + }); + + setupSectionToggles(); + updateCounter(); } -// Initialize input trimming when page loads -document.addEventListener("DOMContentLoaded", setupInputTrimming); - -// Update filter mode selection counter -document.addEventListener("change", function (e) { - if (e.target.name === "filterMode") { - updateSelectionCounter(); - } -}); - -// Modal click outside to close -document - .getElementById("confirmationModal") - .addEventListener("click", function (e) { - if (e.target === this) { - closeModal(); +function toggleCategory(chip) { + const name = chip.dataset.name; + if (state.selectedCategories.has(name)) { + state.selectedCategories.delete(name); + chip.classList.remove('selected'); + } else { + state.selectedCategories.add(name); + chip.classList.add('selected'); } - }); + updateCounter(); +} -// Keyboard shortcuts -document.addEventListener("keydown", function (e) { - // Escape to close modal - if (e.key === "Escape") { - 
    closeModal();
-        return;
-    }
-
-    if (e.ctrlKey || e.metaKey) {
-        switch (e.key) {
-            case "Enter":
-                e.preventDefault();
-                if (currentStep === 1) {
-                    loadCategories();
-                } else if (currentStep === 2) {
-                    showConfirmation();
+function toggleSection(sectionKey, headerElement) {
+    const grid = document.querySelector(`.category-section[data-section="${sectionKey}"]`);
+    if (grid) {
+        if (grid.classList.contains('hidden')) {
+            grid.classList.remove('hidden');
+            headerElement.classList.remove('collapsed');
+            state.collapsedSections.delete(sectionKey);
+        } else {
+            grid.classList.add('hidden');
+            headerElement.classList.add('collapsed');
+            state.collapsedSections.add(sectionKey);
        }
-                break;
-            case "a":
-                e.preventDefault();
-                if (currentStep === 2) {
-                    const chips = document.querySelectorAll(".category-chip");
-                    const allSelected = Array.from(chips).every((chip) =>
-                        chip.classList.contains("selected")
-                    );
-                    chips.forEach((chip) => {
-                        if (allSelected) {
-                            chip.classList.remove("selected");
-                        } else {
-                            chip.classList.add("selected");
+    }
+}
+
+function setupSectionToggles() {
+    document.querySelectorAll('.btn-section-select-all').forEach(btn => {
+        btn.onclick = (e) => {
+            e.stopPropagation(); // Prevent header collapse
+            const section = e.target.dataset.section;
+            // "Select All" toggles only the chips currently visible in this
+            // section, so an active search narrows what gets selected.
+
+            const chips = document.querySelectorAll(`.category-chip[data-type="${section}"]:not(.hidden)`);
+            if (chips.length === 0) return;
+
+            const allSelected = Array.from(chips).every(c => state.selectedCategories.has(c.dataset.name));
+
+            chips.forEach(chip => {
+                const name = chip.dataset.name;
+                if (allSelected) {
+                    state.selectedCategories.delete(name);
+                    chip.classList.remove('selected');
+                } else {
+                    state.selectedCategories.add(name);
+                    chip.classList.add('selected');
+                }
+            });
+            updateCounter();
+        };
+    });
+}
+
+function clearSelection() {
+    state.selectedCategories.clear();
+    document.querySelectorAll('.category-chip').forEach(c => c.classList.remove('selected'));
+    updateCounter();
+}
+
+function selectAllVisible() {
+    const chips = document.querySelectorAll('.category-chip:not(.hidden)');
+    chips.forEach(chip => {
+        state.selectedCategories.add(chip.dataset.name);
+        chip.classList.add('selected');
+    });
+    updateCounter();
+}
+
+function updateCounter() {
+    const count = state.selectedCategories.size;
+    const mode = document.querySelector('input[name="filterMode"]:checked').value;
+    state.filterMode = mode;
+
+    if (count === 0) {
+        elements.selectionText.textContent = 'Select categories to include in your playlist';
+        elements.selectionCounter.classList.remove('has-selection');
+    } else {
+        const action = mode === 'include' ? 'included' : 'excluded';
+        elements.selectionText.innerHTML = `${count} categories will be ${action}`;
+        elements.selectionCounter.classList.add('has-selection');
+    }
+}
+
+function filterCategories(searchTerm) {
+    state.searchTerm = searchTerm.toLowerCase();
+    const chips = document.querySelectorAll('.category-chip');
+
+    chips.forEach(chip => {
+        const name = chip.dataset.name.toLowerCase();
+        if (name.includes(state.searchTerm)) {
+            chip.classList.remove('hidden');
+        } else {
+            chip.classList.add('hidden');
+        }
+    });
+
+    // Hide any section whose chips have all been filtered out
+    document.querySelectorAll('.category-group-wrapper').forEach(wrapper => {
+        const sectionKey = wrapper.dataset.section;
+        const visibleChips = wrapper.querySelectorAll('.category-chip:not(.hidden)');
+
+        if (visibleChips.length === 0) {
+            wrapper.style.display = 'none';
+        } else {
+            wrapper.style.display = 'block';
+
+            // Re-show the grid of a non-collapsed section; its visibility is
+            // driven by the .hidden class, not by inline styles.
+            const grid = wrapper.querySelector('.category-section');
+            if (grid && !state.collapsedSections.has(sectionKey)) {
+                grid.classList.remove('hidden');
            }
-        });
-        updateSelectionCounter();
        }
-      break;
+    });
+}
+
+// Confirmation & Generation
+function showConfirmation() {
+    const count = state.selectedCategories.size;
+    elements.confirmationModal.classList.add('active');
+
+    // Re-read the filter mode in case it changed since the last update
+    state.filterMode = document.querySelector('input[name="filterMode"]:checked').value;
+    const action = state.filterMode === 'include' ? 'Include' : 'Exclude';
+    const desc = count === 0 ? 'All Categories' : `${action} ${count} categories`;
+
+    // Warn when TV Series are part of the selection
+    let seriesWarning = '';
+    const hasSeriesSelected = Array.from(state.selectedCategories).some(name => {
+        // Find the category object to check its content type
+        const cat = state.categories.find(c => c.category_name === name);
+        return cat && cat.content_type === 'series';
+    });
+
+    if (state.credentials.includeVod && state.filterMode === 'include' && hasSeriesSelected) {
+        seriesWarning = `
+
+ ⚠️ +
+ TV Series Selected
+ Fetching episode data is limited by the Xtream API speed.
+ Processing may take a significant amount of time (minutes to hours) depending on the number of series. +
+
+ `; } - } + + elements.modalSummary.innerHTML = ` +
+ Service URL + ${state.credentials.url} +
+
+ Content + ${state.credentials.includeVod ? 'Live TV + VOD' : 'Live TV Only'} +
+
+ Filter Config + ${desc} +
+ ${seriesWarning} + `; +} + +function closeModal() { + elements.confirmationModal.classList.remove('active'); +} + +async function confirmGeneration() { + closeModal(); + showLoading('Generating Playlist...'); + + const requestData = { + ...state.credentials, + nostreamproxy: true, + include_vod: state.credentials.includeVod + }; + + // Remove the original camelCase property to avoid confusion/duplication + delete requestData.includeVod; + + const categories = Array.from(state.selectedCategories); + if (categories.length > 0) { + if (state.filterMode === 'include') { + requestData.wanted_groups = categories.join(','); + } else { + requestData.unwanted_groups = categories.join(','); + } + } + + try { + // Decide method based on payload size + const usePost = categories.length > 10 || JSON.stringify(requestData).length > 1500; + + let response; + if (usePost) { + response = await fetch('/m3u', { + method: 'POST', + headers: { 'Content-Type': 'application/json' }, + body: JSON.stringify(requestData) + }); + } else { + const params = new URLSearchParams(requestData); + response = await fetch(`/m3u?${params}`); + } + + if (!response.ok) throw new Error('Generation failed'); + + const blob = await response.blob(); + const url = window.URL.createObjectURL(blob); + + elements.downloadLink.href = url; + elements.downloadLink.download = state.credentials.includeVod ? 'Full_Playlist.m3u' : 'Live_Playlist.m3u'; + + showStep(3); + + } catch (error) { + console.error(error); + showError('Failed to generate playlist. 
Please check your inputs and try again.'); + showStep(2); + } finally { + hideLoading(); + } +} + +function startOver() { + // Reset inputs + document.getElementById('url').value = ''; + document.getElementById('username').value = ''; + document.getElementById('password').value = ''; + document.getElementById('includeVod').checked = false; + + // Clear state + state.categories = []; + state.selectedCategories.clear(); + state.searchTerm = ''; + elements.searchInput.value = ''; + + showStep(1); +} + +// Event Listeners +document.addEventListener('DOMContentLoaded', () => { + // Filter mode change + document.querySelectorAll('input[name="filterMode"]').forEach(radio => { + radio.addEventListener('change', updateCounter); + }); + + // Search input + elements.searchInput.addEventListener('input', (e) => { + filterCategories(e.target.value); + }); + + // Close modal on outside click + elements.confirmationModal.addEventListener('click', (e) => { + if (e.target === elements.confirmationModal) closeModal(); + }); + + // Input trim handlers + document.querySelectorAll('input').forEach(input => { + input.addEventListener('blur', (e) => { + if(e.target.type !== 'checkbox' && e.target.type !== 'radio') { + e.target.value = e.target.value.trim(); + } + }); + }); }); diff --git a/frontend/style.css b/frontend/style.css index 3673db8..799f9ca 100644 --- a/frontend/style.css +++ b/frontend/style.css @@ -1,871 +1,741 @@ -@import url("https://fonts.googleapis.com/css2?family=Inter:wght@300;400;500;600;700&display=swap"); +/* Reset & Base Styles */ +@import url('https://fonts.googleapis.com/css2?family=Inter:wght@300;400;500;600;700&display=swap'); :root { - --bg-primary: #0a0a0a; - --bg-secondary: #141414; - --bg-tertiary: #1a1a1a; - --bg-card: #1e1e1e; - --bg-elevated: #252525; - --text-primary: #ffffff; - --text-secondary: #a0a0a0; - --text-muted: #666666; - --accent-primary: #3b82f6; - --accent-secondary: #6366f1; - --accent-success: #10b981; - --accent-danger: #ef4444; - 
--accent-warning: #f59e0b; - --border-primary: #2a2a2a; - --border-secondary: #333333; - --border-accent: #404040; - --shadow-sm: 0 1px 2px 0 rgba(0, 0, 0, 0.05); - --shadow: 0 1px 3px 0 rgba(0, 0, 0, 0.1), 0 1px 2px 0 rgba(0, 0, 0, 0.06); - --shadow-md: 0 4px 6px -1px rgba(0, 0, 0, 0.1), - 0 2px 4px -1px rgba(0, 0, 0, 0.06); - --shadow-lg: 0 10px 15px -3px rgba(0, 0, 0, 0.1), - 0 4px 6px -2px rgba(0, 0, 0, 0.05); - --shadow-xl: 0 20px 25px -5px rgba(0, 0, 0, 0.1), - 0 10px 10px -5px rgba(0, 0, 0, 0.04); + --bg-primary: #0f172a; + --bg-secondary: #1e293b; + --bg-tertiary: #334155; + --bg-card: rgba(30, 41, 59, 0.7); + --text-primary: #f8fafc; + --text-secondary: #94a3b8; + --text-muted: #64748b; + --accent-primary: #3b82f6; + --accent-hover: #2563eb; + --accent-success: #10b981; + --accent-danger: #ef4444; + --accent-warning: #f59e0b; + --border-color: #334155; + --border-hover: #475569; + --shadow-sm: 0 1px 2px 0 rgba(0, 0, 0, 0.05); + --shadow-md: 0 4px 6px -1px rgba(0, 0, 0, 0.1), 0 2px 4px -1px rgba(0, 0, 0, 0.06); + --shadow-lg: 0 10px 15px -3px rgba(0, 0, 0, 0.1), 0 4px 6px -2px rgba(0, 0, 0, 0.05); + --shadow-xl: 0 20px 25px -5px rgba(0, 0, 0, 0.1), 0 10px 10px -5px rgba(0, 0, 0, 0.04); } * { - box-sizing: border-box; - margin: 0; - padding: 0; + box-sizing: border-box; + margin: 0; + padding: 0; } body { - font-family: "Inter", -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, - sans-serif; - line-height: 1.6; - color: var(--text-primary); - background: var(--bg-primary); - min-height: 100vh; - font-feature-settings: "cv11", "ss01"; - -webkit-font-smoothing: antialiased; - -moz-osx-font-smoothing: grayscale; + font-family: 'Inter', -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, sans-serif; + line-height: 1.6; + color: var(--text-primary); + background: var(--bg-primary); + background-image: + radial-gradient(at 0% 0%, rgba(59, 130, 246, 0.15) 0px, transparent 50%), + radial-gradient(at 100% 0%, rgba(16, 185, 129, 0.15) 0px, transparent 50%); + 
background-attachment: fixed; + min-height: 100vh; + -webkit-font-smoothing: antialiased; } .container { - max-width: 1000px; - margin: 0 auto; - padding: 1.5rem; + max-width: 900px; + margin: 0 auto; + padding: 2rem 1.5rem; } +/* Header */ .header { - text-align: center; - margin-bottom: 2rem; - padding: 1rem 0; + text-align: center; + margin-bottom: 3rem; + animation: fadeInDown 0.6s ease-out; } .logo { - width: 48px; - height: 48px; - margin: 0 auto 1rem; - border-radius: 12px; - box-shadow: var(--shadow-lg); - position: relative; - overflow: hidden; + width: 64px; + height: 64px; + margin: 0 auto 1.5rem; + border-radius: 16px; + box-shadow: var(--shadow-lg); + position: relative; + background: var(--bg-secondary); + padding: 4px; } .logo img { - width: 100%; - height: 100%; - object-fit: contain; - border-radius: 12px; -} - -.logo::before { - content: ""; - position: absolute; - top: -2px; - left: -2px; - right: -2px; - bottom: -2px; - background: linear-gradient( - 135deg, - var(--accent-primary), - var(--accent-secondary) - ); - border-radius: 14px; - z-index: -1; - opacity: 0.2; - filter: blur(4px); + width: 100%; + height: 100%; + object-fit: contain; + border-radius: 12px; } h1 { - color: var(--text-primary); - font-size: 2.5rem; - font-weight: 700; - margin-bottom: 0.5rem; - letter-spacing: -0.025em; + font-size: 2.5rem; + font-weight: 800; + margin-bottom: 0.5rem; + background: linear-gradient(to right, #fff, #94a3b8); + -webkit-background-clip: text; + -webkit-text-fill-color: transparent; + letter-spacing: -0.025em; } .subtitle { - color: var(--text-secondary); - font-size: 1.125rem; - font-weight: 400; - max-width: 600px; - margin: 0 auto; + color: var(--text-secondary); + font-size: 1.125rem; + max-width: 600px; + margin: 0 auto; } +/* Cards */ .card { - background: var(--bg-card); - border: 1px solid var(--border-primary); - border-radius: 12px; - box-shadow: var(--shadow-lg); - overflow: hidden; - margin-bottom: 1.5rem; - backdrop-filter: 
blur(20px); + background: var(--bg-card); + border: 1px solid var(--border-color); + border-radius: 16px; + box-shadow: var(--shadow-lg); + backdrop-filter: blur(12px); + margin-bottom: 1.5rem; + overflow: hidden; + transition: transform 0.2s ease, box-shadow 0.2s ease; +} + +.card:hover { + box-shadow: var(--shadow-xl); + border-color: var(--border-hover); } .card-header { - background: linear-gradient( - 135deg, - var(--bg-elevated) 0%, - var(--bg-tertiary) 100% - ); - border-bottom: 1px solid var(--border-secondary); - color: var(--text-primary); - padding: 1rem 1.25rem; - font-size: 1.125rem; - font-weight: 600; - display: flex; - align-items: center; - gap: 0.75rem; + background: rgba(15, 23, 42, 0.3); + border-bottom: 1px solid var(--border-color); + padding: 1.25rem 1.5rem; + font-size: 1.1rem; + font-weight: 600; + display: flex; + align-items: center; + gap: 0.75rem; + color: var(--text-primary); } .card-body { - padding: 1.5rem; + padding: 1.5rem; } +/* Forms */ .form-group { - margin-bottom: 1.25rem; + margin-bottom: 1.5rem; } label { - display: block; - margin-bottom: 0.5rem; - font-weight: 500; - color: var(--text-primary); - font-size: 0.875rem; - letter-spacing: 0.025em; + display: block; + margin-bottom: 0.5rem; + font-weight: 500; + color: var(--text-secondary); + font-size: 0.9rem; } -input, -select, -textarea { - width: 100%; - padding: 0.75rem 1rem; - background: var(--bg-secondary); - border: 1px solid var(--border-secondary); - border-radius: 8px; - font-size: 0.875rem; - color: var(--text-primary); - transition: all 0.2s ease; - font-family: inherit; +input[type="text"], +input[type="url"], +input[type="password"], +select { + width: 100%; + padding: 0.75rem 1rem; + background: var(--bg-secondary); + border: 1px solid var(--border-color); + border-radius: 8px; + color: var(--text-primary); + font-size: 1rem; + transition: all 0.2s ease; +} + +input:focus { + outline: none; + border-color: var(--accent-primary); + box-shadow: 0 0 0 3px 
rgba(59, 130, 246, 0.2); + background: var(--bg-tertiary); } input::placeholder { - color: var(--text-muted); -} - -input:focus, -select:focus, -textarea:focus { - outline: none; - border-color: var(--accent-primary); - box-shadow: 0 0 0 3px rgba(59, 130, 246, 0.1); - background: var(--bg-tertiary); + color: var(--text-muted); } +/* Checkbox */ .checkbox-wrapper { - margin-top: 0.5rem; + display: flex; + align-items: center; } .checkbox-label { - display: flex; - align-items: flex-start; - gap: 1rem; - cursor: pointer; - padding: 1rem; - background: var(--bg-secondary); - border: 1px solid var(--border-secondary); - border-radius: 8px; - transition: all 0.2s ease; - margin-bottom: 0; + display: flex; + align-items: flex-start; + gap: 1rem; + cursor: pointer; + padding: 1rem; + background: rgba(30, 41, 59, 0.5); + border: 1px solid var(--border-color); + border-radius: 8px; + transition: all 0.2s ease; + width: 100%; } .checkbox-label:hover { - border-color: var(--border-accent); - background: var(--bg-tertiary); + background: var(--bg-tertiary); + border-color: var(--border-hover); } -.checkbox-label input[type="checkbox"] { - display: none; +.checkbox-label input { + display: none; } .checkmark { - width: 20px; - height: 20px; - background: var(--bg-primary); - border: 2px solid var(--border-accent); - border-radius: 4px; - position: relative; - transition: all 0.2s ease; - flex-shrink: 0; - margin-top: 2px; + width: 22px; + height: 22px; + border: 2px solid var(--text-muted); + border-radius: 6px; + position: relative; + flex-shrink: 0; + transition: all 0.2s ease; + margin-top: 2px; +} + +.checkbox-label input:checked + .checkmark { + background: var(--accent-primary); + border-color: var(--accent-primary); } .checkmark::after { - content: ""; - position: absolute; - left: 6px; - top: 2px; - width: 6px; - height: 10px; - border: solid var(--text-primary); - border-width: 0 2px 2px 0; - transform: rotate(45deg) scale(0); - transition: transform 0.2s ease; + 
content: ''; + position: absolute; + left: 6px; + top: 2px; + width: 6px; + height: 12px; + border: solid white; + border-width: 0 2px 2px 0; + transform: rotate(45deg) scale(0); + transition: transform 0.2s cubic-bezier(0.4, 0, 0.2, 1); } -.checkbox-label input[type="checkbox"]:checked + .checkmark { - background: var(--accent-primary); - border-color: var(--accent-primary); -} - -.checkbox-label input[type="checkbox"]:checked + .checkmark::after { - transform: rotate(45deg) scale(1); - border-color: white; -} - -.checkbox-text { - flex: 1; +.checkbox-label input:checked + .checkmark::after { + transform: rotate(45deg) scale(1); } .checkbox-text strong { - display: block; - color: var(--text-primary); - font-weight: 600; - margin-bottom: 0.25rem; + display: block; + color: var(--text-primary); + margin-bottom: 0.25rem; } .checkbox-text small { - color: var(--text-secondary); - font-size: 0.8rem; - line-height: 1.4; + color: var(--text-secondary); + font-size: 0.85rem; } +/* Buttons */ .btn { - display: inline-flex; - align-items: center; - justify-content: center; - gap: 0.5rem; - padding: 0.75rem 1.5rem; - font-size: 0.875rem; - font-weight: 500; - line-height: 1; - border: none; - border-radius: 8px; - cursor: pointer; - transition: all 0.2s ease; - text-decoration: none; - margin: 0.25rem; - min-height: 42px; - position: relative; - overflow: hidden; + display: inline-flex; + align-items: center; + justify-content: center; + padding: 0.875rem 1.5rem; + border-radius: 8px; + font-weight: 600; + font-size: 0.95rem; + cursor: pointer; + transition: all 0.2s cubic-bezier(0.4, 0, 0.2, 1); + border: none; + gap: 0.5rem; + width: 100%; /* Mobile first */ +} + +@media (min-width: 640px) { + .btn { + width: auto; + } } .btn-primary { - background: var(--accent-primary); - color: white; + background: var(--accent-primary); + color: white; } -.btn-primary:hover:not(:disabled) { - background: #2563eb; - transform: translateY(-1px); - box-shadow: var(--shadow-md); -} - 
-.btn-secondary { - background: var(--bg-elevated); - color: var(--text-primary); - border: 1px solid var(--border-secondary); -} - -.btn-secondary:hover:not(:disabled) { - background: var(--bg-tertiary); - border-color: var(--border-accent); +.btn-primary:hover { + background: var(--accent-hover); + transform: translateY(-1px); + box-shadow: 0 4px 12px rgba(59, 130, 246, 0.3); } .btn-success { - background: var(--accent-success); - color: white; + background: var(--accent-success); + color: white; } -.btn-success:hover:not(:disabled) { - background: #059669; - transform: translateY(-1px); - box-shadow: var(--shadow-md); +.btn-success:hover { + background: #059669; + transform: translateY(-1px); + box-shadow: 0 4px 12px rgba(16, 185, 129, 0.3); } -.btn:disabled { - opacity: 0.5; - cursor: not-allowed; - transform: none; +.btn-secondary { + background: transparent; + border: 1px solid var(--border-color); + color: var(--text-secondary); } -.loading { - display: none; - text-align: center; - padding: 3rem 2rem; - color: var(--text-secondary); +.btn-secondary:hover { + background: var(--bg-tertiary); + color: var(--text-primary); + border-color: var(--text-muted); } -.spinner { - width: 32px; - height: 32px; - border: 2px solid var(--border-primary); - border-top: 2px solid var(--accent-primary); - border-radius: 50%; - animation: spin 1s linear infinite; - margin: 0 auto 1rem; +.btn-text { + background: transparent; + border: none; + color: var(--text-secondary); + font-size: 0.85rem; + cursor: pointer; + padding: 0.25rem 0.5rem; + border-radius: 4px; + transition: all 0.2s; } -@keyframes spin { - 0% { - transform: rotate(0deg); - } - 100% { - transform: rotate(360deg); - } +.btn-text:hover { + color: var(--accent-primary); + background: rgba(59, 130, 246, 0.1); } -.alert { - padding: 1rem; - border-radius: 8px; - margin: 1rem 0; - font-size: 0.875rem; - line-height: 1.5; -} - -.alert-error { - background: rgba(239, 68, 68, 0.1); - color: #fca5a5; - border: 1px solid 
rgba(239, 68, 68, 0.2); -} - -.alert-success { - background: rgba(16, 185, 129, 0.1); - color: #6ee7b7; - border: 1px solid rgba(16, 185, 129, 0.2); -} - -/* Flow Steps */ +/* Steps */ .step { - display: none; + display: none; + animation: fadeIn 0.4s ease-out; } .step.active { - display: block; + display: block; } -/* Privacy Notice */ -.privacy-notice { - background: rgba(59, 130, 246, 0.1); - border: 1px solid rgba(59, 130, 246, 0.2); - border-radius: 8px; - padding: 1rem; - margin-bottom: 1.5rem; - font-size: 0.875rem; - color: var(--text-secondary); - display: flex; - align-items: center; - gap: 0.75rem; +/* Toolbar & Filters */ +.toolbar { + display: flex; + flex-direction: column; + gap: 1rem; + margin-bottom: 1rem; } -.privacy-notice .icon { - color: var(--accent-primary); - font-size: 1.25rem; - flex-shrink: 0; +@media (min-width: 640px) { + .toolbar { + flex-direction: row; + align-items: stretch; + } } -/* Category Chips */ -.categories-container { - display: none; +.filter-mode { + display: flex; + gap: 0.5rem; + background: var(--bg-secondary); + padding: 0.25rem; + border-radius: 10px; + border: 1px solid var(--border-color); + flex: 1; } +.filter-mode label { + flex: 1; + text-align: center; + padding: 0.6rem; + border-radius: 8px; + cursor: pointer; + margin: 0; + color: var(--text-secondary); + transition: all 0.2s ease; + font-size: 0.9rem; + white-space: nowrap; +} + +.filter-mode label:hover { + background: rgba(255, 255, 255, 0.05); +} + +.filter-mode input { + display: none; +} + +.filter-mode input:checked + span { + color: var(--text-primary); + font-weight: 600; +} + +.filter-mode label:has(input:checked) { + background: var(--bg-tertiary); + box-shadow: var(--shadow-sm); +} + +.search-box { + position: relative; + flex: 1.5; +} + +.search-box input { + padding-right: 2.5rem; + height: 100%; +} + +.search-icon { + position: absolute; + right: 0.8rem; + top: 50%; + transform: translateY(-50%); + opacity: 0.5; + pointer-events: none; +} + +/* 
Categories */ .category-chips { - max-height: 500px; - overflow-y: auto; - padding: 1rem; - background: var(--bg-secondary); - border-radius: 12px; - border: 1px solid var(--border-primary); + max-height: 500px; + overflow-y: auto; + padding: 0.5rem; + border: 1px solid var(--border-color); + border-radius: 12px; + background: rgba(15, 23, 42, 0.3); } .category-section-header { - display: flex; - align-items: center; - justify-content: space-between; - margin: 2rem 0 1rem 0; - padding: 0.75rem 1rem; - background: var(--bg-tertiary); - border: 1px solid var(--border-secondary); - border-radius: 8px; + display: flex; + align-items: center; + justify-content: space-between; + padding: 1rem 0.5rem; + position: sticky; + top: 0; + background: var(--bg-card); + z-index: 10; + border-bottom: 1px solid var(--border-color); + backdrop-filter: blur(12px); /* Increased blur */ + -webkit-backdrop-filter: blur(12px); + cursor: pointer; + user-select: none; + border-radius: 8px 8px 0 0; + box-shadow: 0 4px 6px -1px rgba(0, 0, 0, 0.1); /* Add shadow for separation */ } -.category-section-header:first-child { - margin-top: 0; +.category-group-wrapper { + position: relative; + margin-bottom: 0.5rem; } .category-section-header h3 { - margin: 0; - font-size: 1.1rem; - font-weight: 600; - color: var(--text-primary); - display: flex; - align-items: center; - gap: 0.5rem; + font-size: 1rem; + color: var(--text-primary); + display: flex; + align-items: center; + gap: 0.5rem; + flex: 1; } -.section-header-actions { - display: flex; - align-items: center; - gap: 0.75rem; +.chevron { + transition: transform 0.2s ease; +} + +.category-section-header.collapsed .chevron { + transform: rotate(-90deg); +} + +.category-section-header.collapsed { + border-bottom-color: transparent; + border-radius: 8px; } .btn-section-select-all { - font-size: 0.75rem; - padding: 0.375rem 0.75rem; - background: var(--accent-primary); - color: white; - border: none; - border-radius: 6px; - cursor: pointer; - 
font-weight: 500; - transition: all 0.2s ease; + background: var(--bg-tertiary); + border: 1px solid var(--border-color); + color: var(--text-secondary); + font-size: 0.75rem; + padding: 0.25rem 0.75rem; + border-radius: 4px; + cursor: pointer; + transition: all 0.2s; + margin-left: 1rem; } .btn-section-select-all:hover { - background: var(--accent-secondary); - transform: translateY(-1px); -} - -.category-count { - font-size: 0.875rem; - color: var(--text-secondary); - background: var(--bg-secondary); - padding: 0.25rem 0.75rem; - border-radius: 12px; - border: 1px solid var(--border-secondary); + color: var(--text-primary); + border-color: var(--text-muted); } .category-section { - display: flex; - flex-wrap: wrap; - gap: 0.75rem; - margin-bottom: 1.5rem; + display: grid; + grid-template-columns: repeat(auto-fill, minmax(180px, 1fr)); + gap: 0.75rem; + padding: 1rem 0.5rem; +} + +.category-section.hidden { + display: none; } .category-chip { - padding: 0.5rem 1rem; - border-radius: 20px; - border: 2px solid var(--border-secondary); - background: var(--bg-tertiary); - color: var(--text-secondary); - font-size: 0.875rem; - font-weight: 500; - cursor: pointer; - transition: all 0.2s ease; - user-select: none; - position: relative; - overflow: hidden; + background: var(--bg-secondary); + border: 1px solid var(--border-color); + padding: 0.6rem 0.8rem; + border-radius: 8px; + font-size: 0.85rem; + color: var(--text-secondary); + cursor: pointer; + transition: all 0.15s ease; + user-select: none; + text-align: left; + white-space: nowrap; + overflow: hidden; + text-overflow: ellipsis; + position: relative; } .category-chip:hover { - border-color: var(--border-accent); - transform: translateY(-1px); + background: var(--bg-tertiary); + border-color: var(--border-hover); + transform: translateY(-1px); } .category-chip.selected { - background: var(--accent-success); - border-color: var(--accent-success); - color: white; - box-shadow: 0 0 20px rgba(16, 185, 129, 0.3); + 
background: rgba(16, 185, 129, 0.15); + border-color: var(--accent-success); + color: var(--accent-success); + font-weight: 500; } -.category-chip.selected::before { - content: "✓"; - position: absolute; - left: 0.5rem; - font-weight: bold; +.category-chip.hidden { + display: none; } -.category-chip.selected .chip-text { - margin-left: 1rem; -} - -/* Selection Counter */ .selection-counter { - text-align: center; - padding: 1rem; - margin: 1rem 0; - background: var(--bg-secondary); - border-radius: 8px; - font-weight: 500; + background: var(--bg-secondary); + border: 1px solid var(--border-color); + padding: 0.75rem 1rem; + border-radius: 8px; + display: flex; + align-items: center; + justify-content: space-between; + margin: 1rem 0; + font-size: 0.95rem; + color: var(--text-secondary); + gap: 1rem; + flex-wrap: wrap; } .selection-counter.has-selection { - background: rgba(16, 185, 129, 0.1); - border: 1px solid rgba(16, 185, 129, 0.2); - color: var(--accent-success); + background: rgba(59, 130, 246, 0.1); + border-color: rgba(59, 130, 246, 0.3); + color: var(--accent-primary); + font-weight: 500; } -/* Modal */ -.modal { - display: none; - position: fixed; - top: 0; - left: 0; - width: 100%; - height: 100%; - background: rgba(0, 0, 0, 0.8); - backdrop-filter: blur(8px); - z-index: 1000; - animation: fadeIn 0.3s ease; -} - -.modal.active { - display: flex; - align-items: center; - justify-content: center; -} - -.modal-content { - background: var(--bg-card); - border: 1px solid var(--border-primary); - border-radius: 16px; - padding: 2rem; - max-width: 500px; - width: 90%; - max-height: 80vh; - overflow-y: auto; - box-shadow: var(--shadow-xl); - animation: slideInUp 0.3s ease; -} - -.modal-header { - text-align: center; - margin-bottom: 1.5rem; -} - -.modal-header h3 { - font-size: 1.5rem; - font-weight: 600; - color: var(--text-primary); - margin-bottom: 0.5rem; -} - -.modal-summary { - background: var(--bg-secondary); - border-radius: 8px; - padding: 1rem; - 
margin: 1rem 0; -} - -.summary-row { - display: flex; - justify-content: space-between; - align-items: center; - padding: 0.5rem 0; - border-bottom: 1px solid var(--border-primary); -} - -.summary-row:last-child { - border-bottom: none; -} - -.summary-label { - font-weight: 500; - color: var(--text-secondary); -} - -.summary-value { - font-weight: 600; - color: var(--text-primary); -} - -.modal-actions { - display: flex; - gap: 1rem; - justify-content: center; - margin-top: 2rem; +.selection-actions { + display: flex; + gap: 0.5rem; } /* Success State */ .success-state { - text-align: center; - padding: 3rem 2rem; + text-align: center; + padding: 3rem 1rem; } .success-checkmark { - width: 80px; - height: 80px; - border-radius: 50%; - background: var(--accent-success); - margin: 0 auto 2rem; - display: flex; - align-items: center; - justify-content: center; - font-size: 2rem; - color: white; - animation: checkmarkBounce 0.6s ease; - box-shadow: 0 0 40px rgba(16, 185, 129, 0.4); + width: 80px; + height: 80px; + background: var(--accent-success); + border-radius: 50%; + display: flex; + align-items: center; + justify-content: center; + font-size: 2.5rem; + color: white; + margin: 0 auto 1.5rem; + box-shadow: 0 0 20px rgba(16, 185, 129, 0.4); + animation: popIn 0.5s cubic-bezier(0.175, 0.885, 0.32, 1.275); } .success-title { - font-size: 2rem; - font-weight: 700; - color: var(--text-primary); - margin-bottom: 1rem; -} - -.success-message { - font-size: 1.125rem; - color: var(--text-secondary); - margin-bottom: 2rem; - max-width: 400px; - margin-left: auto; - margin-right: auto; + font-size: 1.75rem; + margin-bottom: 1rem; + color: var(--text-primary); } .success-actions { - display: flex; - gap: 1rem; - justify-content: center; - flex-wrap: wrap; + display: flex; + flex-direction: column; + gap: 1rem; + margin-top: 2rem; + justify-content: center; } -/* Filter mode for step 2 */ -.filter-mode { - display: flex; - gap: 2rem; - margin-bottom: 1.5rem; - padding: 1rem; - 
background: var(--bg-secondary); - border-radius: 8px; - border: 1px solid var(--border-primary); - justify-content: center; -} - -.filter-mode label { - display: flex; - align-items: center; - font-weight: 400; - font-size: 0.875rem; - cursor: pointer; - color: var(--text-secondary); - margin: 0; - padding: 0.5rem 1rem; - border-radius: 6px; - transition: all 0.2s ease; -} - -.filter-mode label:hover { - background: var(--bg-tertiary); -} - -.filter-mode input[type="radio"] { - width: 16px; - height: 16px; - margin-right: 0.5rem; - accent-color: var(--accent-primary); -} - -.filter-mode input[type="radio"]:checked + span { - color: var(--text-primary); - font-weight: 500; -} - -.results { - margin-top: 2rem; +@media (min-width: 640px) { + .success-actions { + flex-direction: row; + } } .download-link { - display: inline-flex; - align-items: center; - gap: 0.5rem; - padding: 1rem 2rem; - background: var(--accent-success); - color: white; - text-decoration: none; - border-radius: 8px; - font-weight: 500; - transition: all 0.2s ease; - box-shadow: var(--shadow); -} - -.download-link:hover { - background: #059669; - transform: translateY(-1px); - box-shadow: var(--shadow-lg); -} - -.stats { - display: grid; - grid-template-columns: repeat(auto-fit, minmax(200px, 1fr)); - gap: 1rem; - margin: 2rem 0; -} - -.stat-card { - background: var(--bg-secondary); - border: 1px solid var(--border-primary); - border-radius: 8px; - padding: 1.5rem; - text-align: center; -} - -.stat-number { - font-size: 2rem; - font-weight: 700; - color: var(--accent-primary); - margin-bottom: 0.5rem; -} - -.stat-label { - font-size: 0.875rem; - color: var(--text-secondary); - font-weight: 500; -} - -@media (max-width: 768px) { - .container { - padding: 1rem; - } - - h1 { - font-size: 2rem; - } - - .card-body { - padding: 1rem; - } - - .card-header { - padding: 0.75rem 1rem; - } - - .filter-mode { - flex-direction: column; - gap: 1rem; - } - - .stats { - grid-template-columns: 1fr; - } -} - -/* 
Custom scrollbar for webkit browsers */ -::-webkit-scrollbar { - width: 8px; -} - -::-webkit-scrollbar-track { - background: var(--bg-secondary); - border-radius: 4px; -} - -::-webkit-scrollbar-thumb { - background: var(--border-accent); - border-radius: 4px; -} - -::-webkit-scrollbar-thumb:hover { - background: var(--border-secondary); -} - -/* Focus styles */ -.btn:focus-visible, -input:focus-visible, -select:focus-visible { - outline: 2px solid var(--accent-primary); - outline-offset: 2px; -} - -/* Animation for cards */ -.card { - animation: fadeInUp 0.5s ease-out; -} - -@keyframes fadeInUp { - from { - opacity: 0; - transform: translateY(20px); - } - to { - opacity: 1; - transform: translateY(0); - } -} - -@keyframes fadeIn { - from { - opacity: 0; - } - to { - opacity: 1; - } -} - -@keyframes slideInUp { - from { - opacity: 0; - transform: translateY(30px) scale(0.95); - } - to { - opacity: 1; - transform: translateY(0) scale(1); - } -} - -@keyframes checkmarkBounce { - 0% { - opacity: 0; - transform: scale(0.3); - } - 50% { - opacity: 1; - transform: scale(1.1); - } - 100% { - opacity: 1; - transform: scale(1); - } -} - -/* Responsive improvements */ -@media (max-width: 768px) { - .filter-mode { - flex-direction: column; + text-decoration: none; + display: inline-flex; + align-items: center; + justify-content: center; gap: 0.5rem; - } - - .category-chips { - max-height: 300px; - } - - .modal-content { - padding: 1.5rem; - margin: 1rem; - } - - .modal-actions { - flex-direction: column; - } - - .success-actions { - flex-direction: column; - } +} + +/* Alerts */ +.alert { + padding: 1rem; + border-radius: 8px; + margin: 1rem 0; + font-size: 0.9rem; + display: flex; + align-items: center; + gap: 0.5rem; + animation: slideUp 0.3s ease-out; +} + +.alert-error { + background: rgba(239, 68, 68, 0.1); + border: 1px solid rgba(239, 68, 68, 0.2); + color: #fca5a5; +} + +.alert-warning { + background: rgba(245, 158, 11, 0.1); + border: 1px solid rgba(245, 158, 11, 
0.2); + color: #fcd34d; +} + +.alert-success { + background: rgba(16, 185, 129, 0.1); + border: 1px solid rgba(16, 185, 129, 0.2); + color: #6ee7b7; +} + +/* Loading */ +.loading { + display: none; + text-align: center; + padding: 2rem; +} + +.spinner { + width: 40px; + height: 40px; + border: 3px solid rgba(59, 130, 246, 0.3); + border-radius: 50%; + border-top-color: var(--accent-primary); + animation: spin 1s linear infinite; + margin: 0 auto 1rem; +} + +/* Animations */ +@keyframes fadeIn { + from { opacity: 0; } + to { opacity: 1; } +} + +@keyframes fadeInDown { + from { + opacity: 0; + transform: translateY(-20px); + } + to { + opacity: 1; + transform: translateY(0); + } +} + +@keyframes slideUp { + from { + opacity: 0; + transform: translateY(10px); + } + to { + opacity: 1; + transform: translateY(0); + } +} + +@keyframes spin { + to { transform: rotate(360deg); } +} + +@keyframes popIn { + 0% { transform: scale(0); opacity: 0; } + 70% { transform: scale(1.1); } + 100% { transform: scale(1); opacity: 1; } +} + +/* Modal */ +.modal { + display: none; + position: fixed; + top: 0; + left: 0; + width: 100%; + height: 100%; + background: rgba(0, 0, 0, 0.7); + backdrop-filter: blur(4px); + z-index: 50; + align-items: center; + justify-content: center; + padding: 1rem; +} + +.modal.active { + display: flex; + animation: fadeIn 0.2s ease-out; +} + +.modal-content { + background: var(--bg-card); + border: 1px solid var(--border-color); + border-radius: 16px; + padding: 2rem; + max-width: 500px; + width: 100%; + box-shadow: var(--shadow-xl); + animation: slideUp 0.3s cubic-bezier(0.16, 1, 0.3, 1); +} + +.modal-header h3 { + font-size: 1.5rem; + margin-bottom: 0.5rem; + text-align: center; +} + +.modal-summary { + background: var(--bg-secondary); + border-radius: 8px; + padding: 1rem; + margin: 1.5rem 0; + border: 1px solid var(--border-color); +} + +.summary-row { + display: flex; + justify-content: space-between; + padding: 0.5rem 0; + border-bottom: 1px solid 
var(--border-color); + font-size: 0.9rem; +} + +.summary-row:last-child { + border-bottom: none; +} + +.summary-label { + color: var(--text-secondary); +} + +.summary-value { + color: var(--text-primary); + font-weight: 600; +} + +.modal-actions { + display: flex; + gap: 1rem; + justify-content: flex-end; } diff --git a/run.py b/run.py index bbe5a27..5102105 100644 --- a/run.py +++ b/run.py @@ -1,806 +1,38 @@ -import fnmatch -import ipaddress -import json -import logging -import os -import re -import socket -import time -import urllib.parse -import argparse -from concurrent.futures import ThreadPoolExecutor, as_completed +"""Xtream2M3U - Xtream Codes API to M3U converter -import dns.resolver -import requests -from fake_useragent import UserAgent -from flask import Flask, Response, jsonify, request, send_from_directory +This is the main entry point for the application. +Run with: python run.py [--port PORT] +""" +import argparse +import logging + +from app import create_app +from app.utils import setup_custom_dns # Configure logging logging.basicConfig(level=logging.INFO) logger = logging.getLogger(__name__) -app = Flask(__name__) - -@app.route("/") -def serve_frontend(): - """Serve the frontend index.html file""" - return send_from_directory("frontend", "index.html") - - -@app.route("/assets/") -def serve_assets(filename): - """Serve assets from the docs/assets directory""" - try: - return send_from_directory("docs/assets", filename) - except: - return "Asset not found", 404 - - -@app.route("/") -def serve_static_files(filename): - """Serve static files from the frontend directory""" - # Don't serve API routes through static file handler - api_routes = ["m3u", "xmltv", "categories", "image-proxy", "stream-proxy", "assets"] - if filename.split("/")[0] in api_routes: - return "Not found", 404 - - # Only serve files that exist in the frontend directory - try: - return send_from_directory("frontend", filename) - except: - # If file doesn't exist in frontend, return 404 
- return "File not found", 404 - - -# Get default proxy URL from environment variable -DEFAULT_PROXY_URL = os.environ.get("PROXY_URL") - - -# Set up custom DNS resolver -def setup_custom_dns(): - """Configure a custom DNS resolver using reliable DNS services""" - dns_servers = ["1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4", "9.9.9.9"] - - custom_resolver = dns.resolver.Resolver() - custom_resolver.nameservers = dns_servers - - original_getaddrinfo = socket.getaddrinfo - - def new_getaddrinfo(host, port, family=0, type=0, proto=0, flags=0): - if host: - try: - # Skip DNS resolution for IP addresses - try: - ipaddress.ip_address(host) - # If we get here, the host is already an IP address - logger.debug(f"Host is already an IP address: {host}, skipping DNS resolution") - except ValueError: - # Not an IP address, so use DNS resolution - answers = custom_resolver.resolve(host) - host = str(answers[0]) - logger.debug(f"Custom DNS resolved {host}") - except Exception as e: - logger.info(f"Custom DNS resolution failed for {host}: {e}, falling back to system DNS") - return original_getaddrinfo(host, port, family, type, proto, flags) - - socket.getaddrinfo = new_getaddrinfo - logger.info("Custom DNS resolver set up") - - -# Initialize DNS resolver -setup_custom_dns() - - -# No persistent connections - fresh connection for each request to avoid stale connection issues - -# Common request function for API endpoints -def fetch_api_data(url, timeout=10): - """Make a request to an API endpoint""" - ua = UserAgent() - headers = { - "User-Agent": ua.chrome, - "Accept": "application/json,text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8", - "Accept-Language": "en-US,en;q=0.5", - "Connection": "close", - "Accept-Encoding": "gzip, deflate", - } - - try: - hostname = urllib.parse.urlparse(url).netloc.split(":")[0] - logger.info(f"Making request to host: {hostname}") - - # Use fresh connection for each request to avoid stale connection issues - response = 
requests.get(url, headers=headers, timeout=timeout, stream=True) - response.raise_for_status() - - # For large responses, use streaming JSON parsing - try: - # Check content length to decide parsing strategy - content_length = response.headers.get('Content-Length') - if content_length and int(content_length) > 10_000_000: # > 10MB - logger.info(f"Large response detected ({content_length} bytes), using optimized parsing") - - # Stream the JSON content for better memory efficiency - response.encoding = 'utf-8' # Ensure proper encoding - return response.json() - except json.JSONDecodeError: - # Fallback to text for non-JSON responses - return response.text - - except requests.exceptions.SSLError: - return {"error": "SSL Error", "details": "Failed to verify SSL certificate"}, 503 - except requests.exceptions.RequestException as e: - logger.error(f"RequestException: {e}") - return {"error": "Request Exception", "details": str(e)}, 503 - - -def stream_request(url, headers=None, timeout=30): - """Make a streaming request that doesn't buffer the full response""" - if not headers: - headers = { - "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36", - "Connection": "keep-alive", - } - - # Use longer timeout for streams and set both connect and read timeouts - return requests.get(url, stream=True, headers=headers, timeout=(10, timeout)) - - -def encode_url(url): - """Safely encode a URL for use in proxy endpoints""" - return urllib.parse.quote(url, safe="") if url else "" - - -def generate_streaming_response(response, content_type=None): - """Generate a streaming response with appropriate headers""" - if not content_type: - content_type = response.headers.get("Content-Type", "application/octet-stream") - - def generate(): - try: - bytes_sent = 0 - for chunk in response.iter_content(chunk_size=8192): - if chunk: - bytes_sent += len(chunk) - yield chunk - logger.info(f"Stream completed, sent 
{bytes_sent} bytes") - except requests.exceptions.ChunkedEncodingError as e: - # Chunked encoding error from upstream - log and stop gracefully - logger.warning(f"Upstream chunked encoding error after {bytes_sent} bytes: {str(e)}") - # Don't raise - just stop yielding to close stream gracefully - except requests.exceptions.ConnectionError as e: - # Connection error (reset, timeout, etc.) - log and stop gracefully - logger.warning(f"Connection error after {bytes_sent} bytes: {str(e)}") - # Don't raise - just stop yielding to close stream gracefully - except Exception as e: - logger.error(f"Streaming error after {bytes_sent} bytes: {str(e)}") - # Don't raise exceptions in generators after headers are sent! - # Raising here causes Flask to inject "HTTP/1.1 500" into the chunked body, - finally: - # Always close the upstream response to free resources - try: - response.close() - except: - pass - - headers = { - "Access-Control-Allow-Origin": "*", - "Content-Type": content_type, - } - - # Add content length if available and not using chunked transfer - if "Content-Length" in response.headers and "Transfer-Encoding" not in response.headers: - headers["Content-Length"] = response.headers["Content-Length"] - else: - headers["Transfer-Encoding"] = "chunked" - - return Response(generate(), mimetype=content_type, headers=headers, direct_passthrough=True) - - -@app.route("/image-proxy/") -def proxy_image(image_url): - """Proxy endpoint for images to avoid CORS issues""" - try: - original_url = urllib.parse.unquote(image_url) - logger.info(f"Image proxy request for: {original_url}") - - response = requests.get(original_url, stream=True, timeout=10) - response.raise_for_status() - - content_type = response.headers.get("Content-Type", "") - - if not content_type.startswith("image/"): - logger.error(f"Invalid content type for image: {content_type}") - return Response("Invalid image type", status=415) - - return generate_streaming_response(response, content_type) - except 
requests.Timeout: - return Response("Image fetch timeout", status=504) - except requests.HTTPError as e: - return Response(f"Failed to fetch image: {str(e)}", status=e.response.status_code) - except Exception as e: - logger.error(f"Image proxy error: {str(e)}") - return Response("Failed to process image", status=500) - - -@app.route("/stream-proxy/") -def proxy_stream(stream_url): - """Proxy endpoint for streams""" - try: - original_url = urllib.parse.unquote(stream_url) - logger.info(f"Stream proxy request for: {original_url}") - - response = stream_request(original_url, timeout=60) # Longer timeout for live streams - response.raise_for_status() - - # Determine content type - content_type = response.headers.get("Content-Type") - if not content_type: - if original_url.endswith(".ts"): - content_type = "video/MP2T" - elif original_url.endswith(".m3u8"): - content_type = "application/vnd.apple.mpegurl" - else: - content_type = "application/octet-stream" - - logger.info(f"Using content type: {content_type}") - return generate_streaming_response(response, content_type) - except requests.Timeout: - logger.error(f"Timeout connecting to stream: {original_url}") - return Response("Stream timeout", status=504) - except requests.HTTPError as e: - logger.error(f"HTTP error fetching stream: {e.response.status_code} - {original_url}") - return Response(f"Failed to fetch stream: {str(e)}", status=e.response.status_code) - except Exception as e: - logger.error(f"Stream proxy error: {str(e)} - {original_url}") - return Response("Failed to process stream", status=500) - - -def parse_group_list(group_string): - """Parse a comma-separated string into a list of trimmed strings""" - return [group.strip() for group in group_string.split(",")] if group_string else [] - - -def group_matches(group_title, pattern): - """Check if a group title matches a pattern, supporting wildcards and exact matching""" - # Convert to lowercase for case-insensitive matching - group_lower = 
group_title.lower() - pattern_lower = pattern.lower() - - # Handle spaces in pattern - if " " in pattern_lower: - # For patterns with spaces, split and check each part - pattern_parts = pattern_lower.split() - group_parts = group_lower.split() - - # If pattern has more parts than group, can't match - if len(pattern_parts) > len(group_parts): - return False - - # Check each part of the pattern against group parts - for i, part in enumerate(pattern_parts): - if i >= len(group_parts): - return False - if "*" in part or "?" in part: - if not fnmatch.fnmatch(group_parts[i], part): - return False - else: - if part not in group_parts[i]: - return False - return True - - # Check for wildcard patterns - if "*" in pattern_lower or "?" in pattern_lower: - return fnmatch.fnmatch(group_lower, pattern_lower) - else: - # Simple substring match for non-wildcard patterns - return pattern_lower in group_lower - - -def get_required_params(): - """Get and validate the required parameters from the request (supports both GET and POST)""" - # Handle both GET and POST requests - if request.method == "POST": - data = request.get_json() or {} - url = data.get("url") - username = data.get("username") - password = data.get("password") - proxy_url = data.get("proxy_url", DEFAULT_PROXY_URL) or request.host_url.rstrip("/") - else: - url = request.args.get("url") - username = request.args.get("username") - password = request.args.get("password") - proxy_url = request.args.get("proxy_url", DEFAULT_PROXY_URL) or request.host_url.rstrip("/") - - if not url or not username or not password: - return ( - None, - None, - None, - None, - jsonify({"error": "Missing Parameters", "details": "Required parameters: url, username, and password"}), - 400 - ) - - return url, username, password, proxy_url, None, None - - -def validate_xtream_credentials(url, username, password): - """Validate the Xtream API credentials""" - api_url = f"{url}/player_api.php?username={username}&password={password}" - data = 
fetch_api_data(api_url) - - if isinstance(data, tuple): # Error response - return None, data[0], data[1] - - if "user_info" not in data or "server_info" not in data: - return ( - None, - json.dumps( - { - "error": "Invalid Response", - "details": "Server response missing required data (user_info or server_info)", - } - ), - 400, - ) - - return data, None, None - - -def fetch_api_endpoint(url_info): - """Fetch a single API endpoint - used for concurrent requests""" - url, name, timeout = url_info - try: - logger.info(f"🚀 Fetching {name}...") - start_time = time.time() - data = fetch_api_data(url, timeout=timeout) - end_time = time.time() - - if isinstance(data, list): - logger.info(f"✅ Completed {name} in {end_time-start_time:.1f}s - got {len(data)} items") - else: - logger.info(f"✅ Completed {name} in {end_time-start_time:.1f}s") - return name, data - except Exception as e: - logger.warning(f"❌ Failed to fetch {name}: {e}") - return name, None - -def fetch_categories_and_channels(url, username, password, include_vod=False): - """Fetch categories and channels from the Xtream API using concurrent requests""" - all_categories = [] - all_streams = [] - - try: - # Prepare all API endpoints to fetch concurrently - api_endpoints = [ - (f"{url}/player_api.php?username={username}&password={password}&action=get_live_categories", - "live_categories", 60), - (f"{url}/player_api.php?username={username}&password={password}&action=get_live_streams", - "live_streams", 180), - ] - - # Add VOD endpoints if requested (WARNING: This will be much slower!) 
- if include_vod: - logger.warning("⚠️ Including VOD content - this will take significantly longer!") - logger.info("💡 For faster loading, use the API without include_vod=true") - - # Only add the most essential VOD endpoints - skip the massive streams for categories-only requests - api_endpoints.extend([ - (f"{url}/player_api.php?username={username}&password={password}&action=get_vod_categories", - "vod_categories", 60), - (f"{url}/player_api.php?username={username}&password={password}&action=get_series_categories", - "series_categories", 60), - ]) - - # Only fetch the massive stream lists if explicitly needed for M3U generation - vod_for_m3u = request.endpoint == 'generate_m3u' - if vod_for_m3u: - logger.warning("🐌 Fetching massive VOD/Series streams for M3U generation...") - api_endpoints.extend([ - (f"{url}/player_api.php?username={username}&password={password}&action=get_vod_streams", - "vod_streams", 240), - (f"{url}/player_api.php?username={username}&password={password}&action=get_series", - "series", 240), - ]) - else: - logger.info("⚡ Skipping massive VOD streams for categories-only request") - - # Fetch all endpoints concurrently using ThreadPoolExecutor - logger.info(f"Starting concurrent fetch of {len(api_endpoints)} API endpoints...") - results = {} - - with ThreadPoolExecutor(max_workers=10) as executor: # Increased workers for better concurrency - # Submit all API calls - future_to_name = {executor.submit(fetch_api_endpoint, endpoint): endpoint[1] - for endpoint in api_endpoints} - - # Collect results as they complete - for future in as_completed(future_to_name): - name, data = future.result() - results[name] = data - - logger.info("All concurrent API calls completed!") - - # Process live categories and streams (required) - live_categories = results.get("live_categories") - live_streams = results.get("live_streams") - - if isinstance(live_categories, tuple): # Error response - return None, None, live_categories[0], live_categories[1] - if 
isinstance(live_streams, tuple): # Error response - return None, None, live_streams[0], live_streams[1] - - if not isinstance(live_categories, list) or not isinstance(live_streams, list): - return ( - None, - None, - json.dumps( - { - "error": "Invalid Data Format", - "details": "Live categories or streams data is not in the expected format", - } - ), - 500, - ) - - # Optimized data processing - batch operations for massive datasets - logger.info("Processing live content...") - - # Batch set content_type for live content - if live_categories: - for category in live_categories: - category["content_type"] = "live" - all_categories.extend(live_categories) - - if live_streams: - for stream in live_streams: - stream["content_type"] = "live" - all_streams.extend(live_streams) - - logger.info(f"✅ Added {len(live_categories)} live categories and {len(live_streams)} live streams") - - # Process VOD content if requested and available - if include_vod: - logger.info("Processing VOD content...") - - # Process VOD categories - vod_categories = results.get("vod_categories") - if isinstance(vod_categories, list) and vod_categories: - for category in vod_categories: - category["content_type"] = "vod" - all_categories.extend(vod_categories) - logger.info(f"✅ Added {len(vod_categories)} VOD categories") - - # Process series categories first (lightweight) - series_categories = results.get("series_categories") - if isinstance(series_categories, list) and series_categories: - for category in series_categories: - category["content_type"] = "series" - all_categories.extend(series_categories) - logger.info(f"✅ Added {len(series_categories)} series categories") - - # Only process massive stream lists if they were actually fetched - vod_streams = results.get("vod_streams") - if isinstance(vod_streams, list) and vod_streams: - logger.info(f"🔥 Processing {len(vod_streams)} VOD streams (this is the slow part)...") - - # Batch process for better performance - batch_size = 5000 - for i in 
range(0, len(vod_streams), batch_size): - batch = vod_streams[i:i + batch_size] - for stream in batch: - stream["content_type"] = "vod" - if i + batch_size < len(vod_streams): - logger.info(f" Processed {i + batch_size}/{len(vod_streams)} VOD streams...") - - all_streams.extend(vod_streams) - logger.info(f"✅ Added {len(vod_streams)} VOD streams") - - # Process series (this can also be huge!) - series = results.get("series") - if isinstance(series, list) and series: - logger.info(f"🔥 Processing {len(series)} series (this is also slow)...") - - # Batch process for better performance - batch_size = 5000 - for i in range(0, len(series), batch_size): - batch = series[i:i + batch_size] - for show in batch: - show["content_type"] = "series" - if i + batch_size < len(series): - logger.info(f" Processed {i + batch_size}/{len(series)} series...") - - all_streams.extend(series) - logger.info(f"✅ Added {len(series)} series") - - except Exception as e: - logger.error(f"Critical error fetching API data: {e}") - return ( - None, - None, - json.dumps( - { - "error": "API Fetch Error", - "details": f"Failed to fetch data from IPTV service: {str(e)}", - } - ), - 500, - ) - - logger.info(f"🚀 CONCURRENT FETCH COMPLETE: {len(all_categories)} total categories and {len(all_streams)} total streams") - return all_categories, all_streams, None, None - - -@app.route("/categories", methods=["GET"]) -def get_categories(): - """Get all available categories from the Xtream API""" - # Get and validate parameters - url, username, password, proxy_url, error, status_code = get_required_params() - if error: - return error, status_code - - # Check for VOD parameter - default to false to avoid timeouts (VOD is massive and slow!) 
- include_vod = request.args.get("include_vod", "false").lower() == "true" - logger.info(f"VOD content requested: {include_vod}") - - # Validate credentials - user_data, error_json, error_code = validate_xtream_credentials(url, username, password) - if error_json: - return error_json, error_code, {"Content-Type": "application/json"} - - # Fetch categories - categories, channels, error_json, error_code = fetch_categories_and_channels(url, username, password, include_vod) - if error_json: - return error_json, error_code, {"Content-Type": "application/json"} - - # Return categories as JSON - return json.dumps(categories), 200, {"Content-Type": "application/json"} - - -@app.route("/xmltv", methods=["GET"]) -def generate_xmltv(): - """Generate a filtered XMLTV file from the Xtream API""" - # Get and validate parameters - url, username, password, proxy_url, error, status_code = get_required_params() - if error: - return error, status_code - - # No filtering supported for XMLTV endpoint - - # Validate credentials - user_data, error_json, error_code = validate_xtream_credentials(url, username, password) - if error_json: - return error_json, error_code, {"Content-Type": "application/json"} - - # Fetch XMLTV data - base_url = url.rstrip("/") - xmltv_url = f"{base_url}/xmltv.php?username={username}&password={password}" - xmltv_data = fetch_api_data(xmltv_url, timeout=20) # Longer timeout for XMLTV - - if isinstance(xmltv_data, tuple): # Error response - return json.dumps(xmltv_data[0]), xmltv_data[1], {"Content-Type": "application/json"} - - # If not proxying, return the original XMLTV - if not proxy_url: - return Response( - xmltv_data, mimetype="application/xml", headers={"Content-Disposition": "attachment; filename=guide.xml"} - ) - - # Replace image URLs in the XMLTV content with proxy URLs - def replace_icon_url(match): - original_url = match.group(1) - proxied_url = f"{proxy_url}/image-proxy/{encode_url(original_url)}" - return f' 10 else str(wanted_groups) - 
unwanted_display = f"{len(unwanted_groups)} groups" if len(unwanted_groups) > 10 else str(unwanted_groups) - logger.info(f"Filter parameters - wanted_groups: {wanted_display}, unwanted_groups: {unwanted_display}, include_vod: {include_vod}") - - # Warn about massive filter lists - total_filters = len(wanted_groups) + len(unwanted_groups) - if total_filters > 20: - logger.warning(f"⚠️ Large filter list detected ({total_filters} categories) - this will be slower!") - if total_filters > 50: - logger.warning(f"🐌 MASSIVE filter list ({total_filters} categories) - expect 3-5 minute processing time!") - - # Validate credentials - user_data, error_json, error_code = validate_xtream_credentials(url, username, password) - if error_json: - return error_json, error_code, {"Content-Type": "application/json"} - - # Fetch categories and channels - categories, streams, error_json, error_code = fetch_categories_and_channels(url, username, password, include_vod) - if error_json: - return error_json, error_code, {"Content-Type": "application/json"} - - # Extract user info and server URL - username = user_data["user_info"]["username"] - password = user_data["user_info"]["password"] - - server_url = f"http://{user_data['server_info']['url']}:{user_data['server_info']['port']}" - - # Create category name lookup - category_names = {cat["category_id"]: cat["category_name"] for cat in categories} - - # Log all available groups - all_groups = set(category_names.values()) - logger.info(f"All available groups: {sorted(all_groups)}") - - # Generate M3U playlist - m3u_playlist = "#EXTM3U\n" - - # Track included groups - included_groups = set() - processed_streams = 0 - total_streams = len(streams) - - # Pre-compile filter patterns for massive filter lists (performance optimization) - wanted_patterns = [pattern.lower() for pattern in wanted_groups] if wanted_groups else [] - unwanted_patterns = [pattern.lower() for pattern in unwanted_groups] if unwanted_groups else [] - - logger.info(f"🔍 
Starting to filter {total_streams} streams...") - batch_size = 10000 # Process streams in batches for better performance - - for stream in streams: - content_type = stream.get("content_type", "live") - - # Determine group title based on content type - if content_type == "series": - # For series, use series name as group title - group_title = f"Series - {category_names.get(stream.get('category_id'), 'Uncategorized')}" - stream_name = stream.get("name", "Unknown Series") - else: - # For live and VOD content - group_title = category_names.get(stream.get("category_id"), "Uncategorized") - stream_name = stream.get("name", "Unknown") - - # Add content type prefix for VOD - if content_type == "vod": - group_title = f"VOD - {group_title}" - - # Optimized filtering logic using pre-compiled patterns - include_stream = True - group_title_lower = group_title.lower() - - if wanted_patterns: - # Only include streams from specified groups (optimized matching) - include_stream = any( - group_matches(group_title, wanted_group) for wanted_group in wanted_groups - ) - elif unwanted_patterns: - # Exclude streams from unwanted groups (optimized matching) - include_stream = not any( - group_matches(group_title, unwanted_group) for unwanted_group in unwanted_groups - ) - - processed_streams += 1 - - # Progress logging for large datasets - if processed_streams % batch_size == 0: - logger.info(f" 📊 Processed {processed_streams}/{total_streams} streams ({(processed_streams/total_streams)*100:.1f}%)") - - if include_stream: - included_groups.add(group_title) - - tags = [ - f'tvg-name="{stream_name}"', - f'group-title="{group_title}"', - ] - - # Handle logo URL - proxy only if stream proxying is enabled - original_logo = stream.get("stream_icon", "") - if original_logo and not no_stream_proxy: - logo_url = f"{proxy_url}/image-proxy/{encode_url(original_logo)}" - else: - logo_url = original_logo - tags.append(f'tvg-logo="{logo_url}"') - - # Handle channel id if enabled - if include_channel_id: 
- channel_id = stream.get("epg_channel_id") - if channel_id: - tags.append(f'{channel_id_tag}="{channel_id}"') - - # Create the stream URL based on content type - if content_type == "live": - # Live TV streams - stream_url = f"{server_url}/live/{username}/{password}/{stream['stream_id']}.ts" - elif content_type == "vod": - # VOD streams - stream_url = f"{server_url}/movie/{username}/{password}/{stream['stream_id']}.{stream.get('container_extension', 'mp4')}" - elif content_type == "series": - # Series streams - use the first episode if available - if "episodes" in stream and stream["episodes"]: - first_episode = list(stream["episodes"].values())[0][0] if stream["episodes"] else None - if first_episode: - episode_id = first_episode.get("id", stream.get("series_id", "")) - stream_url = f"{server_url}/series/{username}/{password}/{episode_id}.{first_episode.get('container_extension', 'mp4')}" - else: - continue # Skip series without episodes - else: - # Fallback for series without episode data - series_id = stream.get("series_id", stream.get("stream_id", "")) - stream_url = f"{server_url}/series/{username}/{password}/{series_id}.mp4" - - # Apply stream proxying if enabled - if not no_stream_proxy: - stream_url = f"{proxy_url}/stream-proxy/{encode_url(stream_url)}" - - # Add stream to playlist - m3u_playlist += ( - f'#EXTINF:0 {" ".join(tags)},{stream_name}\n' - ) - m3u_playlist += f"{stream_url}\n" - - # Log included groups after filtering - logger.info(f"Groups included after filtering: {sorted(included_groups)}") - logger.info(f"Groups excluded after filtering: {sorted(all_groups - included_groups)}") - - # Determine filename based on content included - filename = "FullPlaylist.m3u" if include_vod else "LiveStream.m3u" - - logger.info(f"✅ M3U generation complete! 
Generated playlist with {len(included_groups)} groups")
-
-    # Return the M3U playlist with proper CORS headers for frontend
-    headers = {
-        "Content-Disposition": f"attachment; filename={filename}",
-        "Access-Control-Allow-Origin": "*",
-        "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
-        "Access-Control-Allow-Headers": "Content-Type"
-    }
-
-    return Response(m3u_playlist, mimetype="audio/x-scpls", headers=headers)
-
-
-if __name__ == "__main__":
-    parser = argparse.ArgumentParser(description="Run the Flask app.")
+def main():
+    """Main entry point for the application"""
+    # Parse command line arguments
+    parser = argparse.ArgumentParser(description="Run the Xtream2M3U Flask app.")
     parser.add_argument(
-        "--port", type=int, default=5000, help="Port number to run the app on"
+        "--port", type=int, default=5000, help="Port number to run the app on (default: 5000)"
     )
     args = parser.parse_args()
 
+    # Initialize custom DNS resolver
+    setup_custom_dns()
+
+    # Create the Flask app
+    app = create_app()
+
+    # Run the app
+    logger.info(f"Starting Xtream2M3U server on port {args.port}")
     app.run(debug=True, host="0.0.0.0", port=args.port)
+
+
+if __name__ == "__main__":
+    main()
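The removed `group_matches` helper above drives the include/exclude filtering: wildcard patterns go through `fnmatch`, plain patterns fall back to a substring test, both case-insensitively. A minimal standalone sketch of those core rules (this deliberately omits the multi-word branch, and `match_group` is an illustrative name, not the helper the new `app` package ships):

```python
import fnmatch


def match_group(group_title: str, pattern: str) -> bool:
    """Sketch of the removed group_matches rules (multi-word branch omitted):
    patterns containing * or ? are matched with fnmatch against the whole
    title; plain patterns match as substrings. Both are case-insensitive."""
    group_lower = group_title.lower()
    pattern_lower = pattern.lower()
    if "*" in pattern_lower or "?" in pattern_lower:
        # fnmatch matches the full string, so "uk*" matches "UK | Sports HD"
        return fnmatch.fnmatch(group_lower, pattern_lower)
    return pattern_lower in group_lower
```

Note the asymmetry this implies: `sports` (substring) matches any title containing the word, while `sports*` (wildcard) only matches titles that *start* with it.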
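The `/image-proxy/` and `/stream-proxy/` endpoints embed the full upstream URL as a single path segment, which is why the removed `encode_url` helper percent-encodes with `safe=""` (so even `/` and `:` are escaped) and the endpoints reverse it with `urllib.parse.unquote`. A round-trip sketch — `decode_url` is a name introduced here for symmetry; the diff calls `unquote` inline:

```python
import urllib.parse


def encode_url(url: str) -> str:
    # Percent-encode every character (including "/" and ":") so the whole
    # upstream URL fits into one path segment of /stream-proxy/<...>.
    return urllib.parse.quote(url, safe="") if url else ""


def decode_url(encoded: str) -> str:
    # Inverse operation, as performed inside the proxy endpoints.
    return urllib.parse.unquote(encoded)


upstream = "http://example.com:8080/live/user/pass/42.ts"
encoded = encode_url(upstream)
```

With `safe=""`, the encoded form contains no slashes, so Flask's default `<path>`-less route converter never splits it.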
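The removed `generate_m3u` loop emits one `#EXTINF` header per stream (with `tvg-name`, `group-title`, `tvg-logo`, and optionally a channel-id tag) followed by the stream URL. A hedged sketch of that tag assembly — `extinf_line` is a name invented for this example, and it hard-codes `tvg-id` where the real code used a configurable `channel_id_tag`:

```python
def extinf_line(name: str, group: str, logo: str = "", channel_id: str = "") -> str:
    """Build one #EXTINF header in the shape the removed playlist loop used."""
    tags = [f'tvg-name="{name}"', f'group-title="{group}"']
    # The original always emitted tvg-logo, even when empty; mirrored here.
    tags.append(f'tvg-logo="{logo}"')
    if channel_id:
        # Stand-in for the configurable channel_id_tag (e.g. tvg-id).
        tags.append(f'tvg-id="{channel_id}"')
    return f'#EXTINF:0 {" ".join(tags)},{name}'
```

Each playlist entry is then this line plus the (optionally proxied) stream URL on the next line, appended after the `#EXTM3U` header.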
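The custom DNS resolver that `setup_custom_dns` installs (shown in the removed `run.py` above) skips resolution entirely when the host is already an IP literal, by letting `ipaddress.ip_address` raise `ValueError` for hostnames. That check, isolated into a helper (`is_ip_literal` is a name introduced here; the original inlines the try/except):

```python
import ipaddress


def is_ip_literal(host: str) -> bool:
    """True if host is already an IPv4/IPv6 address and needs no DNS lookup.

    The removed setup_custom_dns used this pattern inside its patched
    socket.getaddrinfo before consulting the custom resolver.
    """
    try:
        ipaddress.ip_address(host)
        return True
    except ValueError:
        return False
```

This keeps the patched `getaddrinfo` cheap for direct-IP providers and avoids pointless queries to the Cloudflare/Google resolvers.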