@@ -32,169 +33,141 @@
## About
-xtream2m3u is a powerful and flexible tool designed to bridge the gap between Xtream API-based IPTV services and M3U playlist-compatible media players. It provides a simple API that fetches live streams from Xtream IPTV services, filters out unwanted channel groups, and generates a customized M3U playlist file.
+**xtream2m3u** is a powerful and flexible tool designed to bridge the gap between Xtream API-based IPTV services and M3U playlist-compatible media players. It offers a **user-friendly web interface** and a **comprehensive API** to generate customized playlists.
-### Why xtream2m3u?
+Many IPTV providers use the Xtream API, which isn't directly compatible with all players. xtream2m3u allows you to:
+1. Connect to your Xtream IPTV provider.
+2. Select exactly which channel groups (Live TV) or VOD categories (Movies/Series) you want.
+3. Generate a standard M3U playlist compatible with almost any player (VLC, TiviMate, Televizo, etc.).
-Many IPTV providers use the Xtream API, which isn't directly compatible with media players that accept M3U playlists. xtream2m3u solves this problem by:
+## Features
-1. Connecting to Xtream API-based IPTV services
-2. Fetching the list of available live streams
-3. Allowing users to filter channels by including only wanted groups or excluding unwanted groups
-4. Generating a standard M3U playlist that's compatible with a wide range of media players
+* **Web Interface:** Easy-to-use UI for managing credentials and selecting categories.
+* **Custom Playlists:** Filter channels by including or excluding specific groups.
+* **VOD Support:** Optionally include Movies and Series in your playlist.
+* **Stream Proxying:** Built-in proxy to work around CORS issues or hide upstream URLs.
+* **Custom DNS:** Uses reliable DNS resolvers (Cloudflare, Google) to ensure connection stability.
+* **XMLTV EPG:** Generates a compatible XMLTV guide for your playlist.
+* **Docker Ready:** Simple deployment with Docker and Docker Compose.
## Prerequisites
To use xtream2m3u, you'll need:
+* An active subscription to an IPTV service that uses the Xtream API.
-- An active subscription to an IPTV service that uses the Xtream API
-
-For deployment, you'll need one of the following:
-
-- Docker and Docker Compose
-- Python 3.12 or higher
-
-## Environment Variables
-
-The application supports the following environment variables:
-
-- `PROXY_URL`: [Optional] Set a default custom base URL for all proxied content (can be overridden with the `proxy_url` parameter)
+For deployment:
+* **Docker & Docker Compose** (Recommended)
+* OR **Python 3.9+**
## Installation
### Using Docker (Recommended)
-1. Install Docker and Docker Compose
-2. Clone the repository:
- ```
- git clone https://github.com/ovosimpatico/xtream2m3u.git
- cd xtream2m3u
- ```
-3. Run the application:
- ```
- docker-compose up -d
- ```
+1. Clone the repository:
+ ```bash
+ git clone https://github.com/ovosimpatico/xtream2m3u.git
+ cd xtream2m3u
+ ```
+2. Run the application:
+ ```bash
+ docker-compose up -d
+ ```
+3. Open your browser and navigate to `http://localhost:5000`.
### Native Python Installation
-1. Install Python (3.9 or higher)
-2. Clone the repository:
- ```
- git clone https://github.com/ovosimpatico/xtream2m3u.git
- cd xtream2m3u
- ```
-3. Create a virtual environment:
- ```
- python -m venv venv
- source venv/bin/activate # On Windows, use `venv\Scripts\activate`
- ```
-4. Install the required packages:
- ```
- pip install -r requirements.txt
- ```
-5. Run the application:
- ```
- python run.py
- ```
+1. Clone the repository and enter the directory:
+ ```bash
+ git clone https://github.com/ovosimpatico/xtream2m3u.git
+ cd xtream2m3u
+ ```
+2. Create and activate a virtual environment:
+ ```bash
+ python -m venv venv
+ source venv/bin/activate # On Windows: venv\Scripts\activate
+ ```
+3. Install dependencies:
+ ```bash
+ pip install -r requirements.txt
+ ```
+4. Run the server:
+ ```bash
+ python run.py
+ ```
+5. Open your browser and navigate to `http://localhost:5000`.
## Usage
-### API Endpoints
+### Web Interface
+The easiest way to use xtream2m3u is via the web interface at `http://localhost:5000`.
+1. **Enter Credentials:** Input your IPTV provider's URL, username, and password.
+2. **Select Content:** Choose whether to include VOD (Movies & Series).
+3. **Filter Categories:** Load categories and select which ones to include or exclude.
+4. **Generate:** Click "Generate Playlist" to download your custom M3U file.
-The application provides several endpoints for generating playlists and proxying media:
+### Environment Variables
+* `PROXY_URL`: [Optional] Set a custom base URL for proxied content (useful if running behind a reverse proxy).
+* `PORT`: [Optional] Port to run the server on (default: 5000).
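+
+For example, with Docker Compose these can be set under `environment` (the service name and build context below are illustrative; adjust to match the repository's `docker-compose.yml`):
+
+```yaml
+services:
+  xtream2m3u:
+    build: .
+    ports:
+      - "5000:5000"
+    environment:
+      - PROXY_URL=https://your-public-domain.com
+      - PORT=5000
+```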
-#### M3U Playlist Generation
+## API Documentation
+For advanced users or automation, you can use the API endpoints directly.
+
+### 1. Generate M3U Playlist
+`GET /m3u` or `POST /m3u`
+
+| Parameter | Type | Required | Description |
+| :--- | :--- | :--- | :--- |
+| `url` | string | Yes | IPTV Service URL |
+| `username` | string | Yes | IPTV Username |
+| `password` | string | Yes | IPTV Password |
+| `unwanted_groups` | string | No | Comma-separated list of groups to **exclude** |
+| `wanted_groups` | string | No | Comma-separated list of groups to **include** (takes precedence) |
+| `include_vod` | boolean | No | Set `true` to include Movies & Series (default: `false`) |
+| `nostreamproxy` | boolean | No | Set `true` to disable stream proxying (direct links) |
+| `proxy_url` | string | No | Custom base URL for proxied streams |
+| `include_channel_id` | boolean | No | Set `true` to include `epg_channel_id` tag |
+| `channel_id_tag` | string | No | Custom tag name for channel ID (default: `channel-id`) |
+
+**Wildcard Support:** `unwanted_groups` and `wanted_groups` support `*` (wildcard) and `?` (single char).
+* Example: `*Sports*` matches "Sky Sports", "BeIN Sports", etc.
+
+**Example:**
```
-GET /m3u
+http://localhost:5000/m3u?url=http://iptv.com&username=user&password=pass&wanted_groups=Sports*,News&include_vod=true
```
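The wildcard semantics above can be sketched with Python's standard `fnmatch` module. This is a minimal sketch for illustration only; the project's actual matching helper may be implemented differently:

```python
from fnmatch import fnmatch

def group_matches(group_name, pattern):
    """Case-insensitive glob match: * = any run of chars, ? = one char."""
    return fnmatch(group_name.lower(), pattern.lower())

print(group_matches("Sky Sports", "*Sports*"))  # True
print(group_matches("US| 24/7", "US*"))         # True
print(group_matches("News", "*Sports*"))        # False
```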
-##### Query Parameters
+### 2. Generate XMLTV Guide
+`GET /xmltv`
-- `url` (required): The base URL of your IPTV service
-- `username` (required): Your IPTV service username
-- `password` (required): Your IPTV service password
-- `unwanted_groups` (optional): A comma-separated list of group names to exclude
-- `wanted_groups` (optional): A comma-separated list of group names to include (takes precedence over unwanted_groups)
-- `nostreamproxy` (optional): Set to 'true' to disable stream proxying
-- `proxy_url` (optional): Custom base URL for proxied content (overrides auto-detection)
-- `include_channel_id` (optional): Set to 'true' to include `epg_channel_id` in M3U, useful for [Channels](https://getchannels.com)
-- `channel_id_tag` (optional): Name of the tag to use for `epg_channel_id` data in M3U, defaults to `channel-id`
+| Parameter | Type | Required | Description |
+| :--- | :--- | :--- | :--- |
+| `url` | string | Yes | IPTV Service URL |
+| `username` | string | Yes | IPTV Username |
+| `password` | string | Yes | IPTV Password |
+| `proxy_url` | string | No | Custom base URL for proxied images |
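+
+**Example:**
```
http://localhost:5000/xmltv?url=http://iptv.com&username=user&password=pass
```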
-Note: For `unwanted_groups` and `wanted_groups`, you can use wildcard patterns with `*` and `?` characters. For example:
-- `US*` will match all groups starting with "US"
-- `*Sports*` will match any group containing "Sports"
-- `US| ?/?/?` will match groups like "US| 24/7"
+### 3. Get Categories
+`GET /categories`
-##### Example Request
+Returns a JSON list of all available categories.
-```
-http://localhost:5000/m3u?url=http://your-iptv-service.com&username=your_username&password=your_password&unwanted_groups=news,sports
-```
+| Parameter | Type | Required | Description |
+| :--- | :--- | :--- | :--- |
+| `url` | string | Yes | IPTV Service URL |
+| `username` | string | Yes | IPTV Username |
+| `password` | string | Yes | IPTV Password |
+| `include_vod` | boolean | No | Set `true` to include VOD categories |
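+
+**Example:**
```
http://localhost:5000/categories?url=http://iptv.com&username=user&password=pass&include_vod=true
```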
-Or to only include specific groups:
-
-```
-http://localhost:5000/m3u?url=http://your-iptv-service.com&username=your_username&password=your_password&wanted_groups=movies,series
-```
-
-With a custom proxy URL:
-
-```
-http://localhost:5000/m3u?url=http://your-iptv-service.com&username=your_username&password=your_password&proxy_url=https://your-public-domain.com
-```
-
-#### XMLTV Guide Generation
-
-```
-GET /xmltv
-```
-
-##### Query Parameters
-
-- `url` (required): The base URL of your IPTV service
-- `username` (required): Your IPTV service username
-- `password` (required): Your IPTV service password
-- `proxy_url` (optional): Custom base URL for proxied content (overrides auto-detection)
-
-
-##### Example Request
-
-```
-http://localhost:5000/xmltv?url=http://your-iptv-service.com&username=your_username&password=your_password
-```
-
-With a custom proxy URL:
-
-```
-http://localhost:5000/xmltv?url=http://your-iptv-service.com&username=your_username&password=your_password&proxy_url=https://your-public-domain.com
-```
-
-#### Image Proxy
-
-```
-GET /image-proxy/
-```
-
-Proxies image requests, like channel logos and EPG images.
-
-#### Stream Proxy
-
-```
-GET /stream-proxy/
-```
-
-Proxies video streams. Supports the following formats:
-- MPEG-TS (.ts)
-- HLS (.m3u8)
-- Generic video streams
+### 4. Proxy Endpoints
+* `GET /image-proxy/<encoded_url>`: Proxies images (logos, covers).
+* `GET /stream-proxy/<encoded_url>`: Proxies video streams.
## License
-This project is licensed under the GNU Affero General Public License v3.0 (AGPLv3). This license requires that any modifications to the code must also be made available under the same license, even when the software is run as a service (e.g., over a network). See the [LICENSE](LICENSE) file for details.
+This project is licensed under the **GNU Affero General Public License v3.0 (AGPLv3)**.
+See the [LICENSE](LICENSE) file for details.
## Disclaimer
-xtream2m3u is a tool for generating M3U playlists from Xtream API-based IPTV services but does not provide IPTV services itself. A valid subscription to an IPTV service using the Xtream API is required to use this tool.
-
-xtream2m3u does not endorse piracy and requires users to ensure they have the necessary rights and permissions. The developers are not responsible for any misuse of the software or violations of IPTV providers' terms of service.
\ No newline at end of file
+xtream2m3u is a tool for managing your own legal IPTV subscriptions. It **does not** provide any content, channels, or streams. The developers are not responsible for how this tool is used.
diff --git a/app/__init__.py b/app/__init__.py
new file mode 100644
index 0000000..1df9591
--- /dev/null
+++ b/app/__init__.py
@@ -0,0 +1,32 @@
+"""Flask application factory and configuration"""
+import logging
+import os
+
+from flask import Flask
+
+# Configure logging
+logging.basicConfig(level=logging.INFO)
+logger = logging.getLogger(__name__)
+
+
+def create_app():
+ """Create and configure the Flask application"""
+ app = Flask(__name__,
+ static_folder='../frontend',
+ template_folder='../frontend')
+
+ # Get default proxy URL from environment variable
+ app.config['DEFAULT_PROXY_URL'] = os.environ.get("PROXY_URL")
+
+ # Register blueprints
+ from app.routes.api import api_bp
+ from app.routes.proxy import proxy_bp
+ from app.routes.static import static_bp
+
+ app.register_blueprint(static_bp)
+ app.register_blueprint(proxy_bp)
+ app.register_blueprint(api_bp)
+
+ logger.info("Flask application created and configured")
+
+ return app
diff --git a/app/routes/__init__.py b/app/routes/__init__.py
new file mode 100644
index 0000000..a9cfe05
--- /dev/null
+++ b/app/routes/__init__.py
@@ -0,0 +1,6 @@
+"""Routes package - Register blueprints here"""
+from .api import api_bp
+from .proxy import proxy_bp
+from .static import static_bp
+
+__all__ = ['static_bp', 'proxy_bp', 'api_bp']
diff --git a/app/routes/api.py b/app/routes/api.py
new file mode 100644
index 0000000..ea1e1b5
--- /dev/null
+++ b/app/routes/api.py
@@ -0,0 +1,208 @@
+"""API routes for Xtream Codes proxy (categories, M3U, XMLTV)"""
+import json
+import logging
+import os
+import re
+
+from flask import Blueprint, Response, current_app, jsonify, request
+
+from app.services import (
+ fetch_api_data,
+ fetch_categories_and_channels,
+ generate_m3u_playlist,
+ validate_xtream_credentials,
+)
+from app.utils import encode_url, parse_group_list
+
+logger = logging.getLogger(__name__)
+
+api_bp = Blueprint('api', __name__)
+
+
+def get_required_params():
+ """Get and validate the required parameters from the request (supports both GET and POST)"""
+ # Handle both GET and POST requests
+ if request.method == "POST":
+ data = request.get_json() or {}
+ url = data.get("url")
+ username = data.get("username")
+ password = data.get("password")
+ proxy_url = data.get("proxy_url", current_app.config['DEFAULT_PROXY_URL']) or request.host_url.rstrip("/")
+ else:
+ url = request.args.get("url")
+ username = request.args.get("username")
+ password = request.args.get("password")
+ proxy_url = request.args.get("proxy_url", current_app.config['DEFAULT_PROXY_URL']) or request.host_url.rstrip("/")
+
+ if not url or not username or not password:
+ return (
+ None,
+ None,
+ None,
+ None,
+ jsonify({"error": "Missing Parameters", "details": "Required parameters: url, username, and password"}),
+ 400
+ )
+
+ return url, username, password, proxy_url, None, None
+
+
+@api_bp.route("/categories", methods=["GET"])
+def get_categories():
+ """Get all available categories from the Xtream API"""
+ # Get and validate parameters
+ url, username, password, proxy_url, error, status_code = get_required_params()
+ if error:
+ return error, status_code
+
+ # Check for VOD parameter - default to false to avoid timeouts (VOD is massive and slow!)
+ include_vod = request.args.get("include_vod", "false").lower() == "true"
+ logger.info(f"VOD content requested: {include_vod}")
+
+ # Validate credentials
+ user_data, error_json, error_code = validate_xtream_credentials(url, username, password)
+ if error_json:
+ return error_json, error_code, {"Content-Type": "application/json"}
+
+ # Fetch categories
+ categories, channels, error_json, error_code = fetch_categories_and_channels(url, username, password, include_vod)
+ if error_json:
+ return error_json, error_code, {"Content-Type": "application/json"}
+
+ # Return categories as JSON
+ return json.dumps(categories), 200, {"Content-Type": "application/json"}
+
+
+@api_bp.route("/xmltv", methods=["GET"])
+def generate_xmltv():
+ """Generate a filtered XMLTV file from the Xtream API"""
+ # Get and validate parameters
+ url, username, password, proxy_url, error, status_code = get_required_params()
+ if error:
+ return error, status_code
+
+ # No filtering supported for XMLTV endpoint
+
+ # Validate credentials
+ user_data, error_json, error_code = validate_xtream_credentials(url, username, password)
+ if error_json:
+ return error_json, error_code, {"Content-Type": "application/json"}
+
+ # Fetch XMLTV data
+ base_url = url.rstrip("/")
+ xmltv_url = f"{base_url}/xmltv.php?username={username}&password={password}"
+ xmltv_data = fetch_api_data(xmltv_url, timeout=20) # Longer timeout for XMLTV
+
+ if isinstance(xmltv_data, tuple): # Error response
+ return json.dumps(xmltv_data[0]), xmltv_data[1], {"Content-Type": "application/json"}
+
+ # If not proxying, return the original XMLTV
+ if not proxy_url:
+ return Response(
+ xmltv_data, mimetype="application/xml", headers={"Content-Disposition": "attachment; filename=guide.xml"}
+ )
+
+ # Replace image URLs in the XMLTV content with proxy URLs
+ def replace_icon_url(match):
+ original_url = match.group(1)
+ proxied_url = f"{proxy_url}/image-proxy/{encode_url(original_url)}"
+        return f'<icon src="{proxied_url}"'
+
+    wanted_display = f"{len(wanted_groups)} groups" if len(wanted_groups) > 10 else str(wanted_groups)
+ unwanted_display = f"{len(unwanted_groups)} groups" if len(unwanted_groups) > 10 else str(unwanted_groups)
+ logger.info(f"Filter parameters - wanted_groups: {wanted_display}, unwanted_groups: {unwanted_display}, include_vod: {include_vod}")
+
+ # Warn about massive filter lists
+ total_filters = len(wanted_groups) + len(unwanted_groups)
+ if total_filters > 20:
+ logger.warning(f"⚠️ Large filter list detected ({total_filters} categories) - this will be slower!")
+ if total_filters > 50:
+ logger.warning(f"🐌 MASSIVE filter list ({total_filters} categories) - expect 3-5 minute processing time!")
+
+ # Validate credentials
+ user_data, error_json, error_code = validate_xtream_credentials(url, username, password)
+ if error_json:
+ return error_json, error_code, {"Content-Type": "application/json"}
+
+ # Fetch categories and channels
+ categories, streams, error_json, error_code = fetch_categories_and_channels(url, username, password, include_vod)
+ if error_json:
+ return error_json, error_code, {"Content-Type": "application/json"}
+
+ # Extract user info and server URL
+ username = user_data["user_info"]["username"]
+ password = user_data["user_info"]["password"]
+
+ server_url = f"http://{user_data['server_info']['url']}:{user_data['server_info']['port']}"
+
+ # Generate M3U playlist
+ m3u_playlist = generate_m3u_playlist(
+ url=url,
+ username=username,
+ password=password,
+ server_url=server_url,
+ categories=categories,
+ streams=streams,
+ wanted_groups=wanted_groups,
+ unwanted_groups=unwanted_groups,
+ no_stream_proxy=no_stream_proxy,
+ include_vod=include_vod,
+ include_channel_id=include_channel_id,
+ channel_id_tag=channel_id_tag,
+ proxy_url=proxy_url
+ )
+
+ # Determine filename based on content included
+ filename = "FullPlaylist.m3u" if include_vod else "LiveStream.m3u"
+
+ # Return the M3U playlist with proper CORS headers for frontend
+ headers = {
+        "Content-Disposition": f"attachment; filename={filename}",
+ "Access-Control-Allow-Origin": "*",
+ "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
+ "Access-Control-Allow-Headers": "Content-Type"
+ }
+
+    return Response(m3u_playlist, mimetype="audio/x-mpegurl", headers=headers)
diff --git a/app/routes/proxy.py b/app/routes/proxy.py
new file mode 100644
index 0000000..55409d9
--- /dev/null
+++ b/app/routes/proxy.py
@@ -0,0 +1,71 @@
+"""Proxy routes for images and streams"""
+import logging
+import urllib.parse
+
+import requests
+from flask import Blueprint, Response
+
+from app.utils.streaming import generate_streaming_response, stream_request
+
+logger = logging.getLogger(__name__)
+
+proxy_bp = Blueprint('proxy', __name__)
+
+
+@proxy_bp.route("/image-proxy/<path:image_url>")
+def proxy_image(image_url):
+ """Proxy endpoint for images to avoid CORS issues"""
+ try:
+ original_url = urllib.parse.unquote(image_url)
+ logger.info(f"Image proxy request for: {original_url}")
+
+ response = requests.get(original_url, stream=True, timeout=10)
+ response.raise_for_status()
+
+ content_type = response.headers.get("Content-Type", "")
+
+ if not content_type.startswith("image/"):
+ logger.error(f"Invalid content type for image: {content_type}")
+ return Response("Invalid image type", status=415)
+
+ return generate_streaming_response(response, content_type)
+ except requests.Timeout:
+ return Response("Image fetch timeout", status=504)
+ except requests.HTTPError as e:
+ return Response(f"Failed to fetch image: {str(e)}", status=e.response.status_code)
+ except Exception as e:
+ logger.error(f"Image proxy error: {str(e)}")
+ return Response("Failed to process image", status=500)
+
+
+@proxy_bp.route("/stream-proxy/<path:stream_url>")
+def proxy_stream(stream_url):
+ """Proxy endpoint for streams"""
+ try:
+ original_url = urllib.parse.unquote(stream_url)
+ logger.info(f"Stream proxy request for: {original_url}")
+
+ response = stream_request(original_url, timeout=60) # Longer timeout for live streams
+ response.raise_for_status()
+
+ # Determine content type
+ content_type = response.headers.get("Content-Type")
+ if not content_type:
+ if original_url.endswith(".ts"):
+ content_type = "video/MP2T"
+ elif original_url.endswith(".m3u8"):
+ content_type = "application/vnd.apple.mpegurl"
+ else:
+ content_type = "application/octet-stream"
+
+ logger.info(f"Using content type: {content_type}")
+ return generate_streaming_response(response, content_type)
+ except requests.Timeout:
+ logger.error(f"Timeout connecting to stream: {original_url}")
+ return Response("Stream timeout", status=504)
+ except requests.HTTPError as e:
+ logger.error(f"HTTP error fetching stream: {e.response.status_code} - {original_url}")
+ return Response(f"Failed to fetch stream: {str(e)}", status=e.response.status_code)
+ except Exception as e:
+ logger.error(f"Stream proxy error: {str(e)} - {original_url}")
+ return Response("Failed to process stream", status=500)
diff --git a/app/routes/static.py b/app/routes/static.py
new file mode 100644
index 0000000..80d55f8
--- /dev/null
+++ b/app/routes/static.py
@@ -0,0 +1,45 @@
+"""Static file and frontend routes"""
+import logging
+import os
+
+from flask import Blueprint, send_from_directory
+
+logger = logging.getLogger(__name__)
+
+static_bp = Blueprint('static', __name__)
+
+# Get the base directory (project root)
+BASE_DIR = os.path.abspath(os.path.join(os.path.dirname(__file__), '..', '..'))
+FRONTEND_DIR = os.path.join(BASE_DIR, 'frontend')
+ASSETS_DIR = os.path.join(BASE_DIR, 'docs', 'assets')
+
+
+@static_bp.route("/")
+def serve_frontend():
+ """Serve the frontend index.html file"""
+ return send_from_directory(FRONTEND_DIR, "index.html")
+
+
+@static_bp.route("/assets/<path:filename>")
+def serve_assets(filename):
+ """Serve assets from the docs/assets directory"""
+ try:
+ return send_from_directory(ASSETS_DIR, filename)
+    except Exception:
+ return "Asset not found", 404
+
+
+@static_bp.route("/<path:filename>")
+def serve_static_files(filename):
+ """Serve static files from the frontend directory"""
+ # Don't serve API routes through static file handler
+ api_routes = ["m3u", "xmltv", "categories", "image-proxy", "stream-proxy", "assets"]
+ if filename.split("/")[0] in api_routes:
+ return "Not found", 404
+
+ # Only serve files that exist in the frontend directory
+ try:
+ return send_from_directory(FRONTEND_DIR, filename)
+    except Exception:
+ # If file doesn't exist in frontend, return 404
+ return "File not found", 404
diff --git a/app/services/__init__.py b/app/services/__init__.py
new file mode 100644
index 0000000..13ea94c
--- /dev/null
+++ b/app/services/__init__.py
@@ -0,0 +1,16 @@
+"""Services package"""
+from .m3u_generator import generate_m3u_playlist
+from .xtream_api import (
+ fetch_api_data,
+ fetch_categories_and_channels,
+ fetch_series_episodes,
+ validate_xtream_credentials,
+)
+
+__all__ = [
+ 'fetch_api_data',
+ 'validate_xtream_credentials',
+ 'fetch_categories_and_channels',
+ 'fetch_series_episodes',
+ 'generate_m3u_playlist'
+]
diff --git a/app/services/m3u_generator.py b/app/services/m3u_generator.py
new file mode 100644
index 0000000..5d4c59c
--- /dev/null
+++ b/app/services/m3u_generator.py
@@ -0,0 +1,250 @@
+"""M3U playlist generation service"""
+import logging
+from concurrent.futures import ThreadPoolExecutor, as_completed
+
+from app.services.xtream_api import fetch_series_episodes
+from app.utils import encode_url, group_matches
+
+logger = logging.getLogger(__name__)
+
+
+def generate_m3u_playlist(
+ url,
+ username,
+ password,
+ server_url,
+ categories,
+ streams,
+ wanted_groups=None,
+ unwanted_groups=None,
+ no_stream_proxy=False,
+ include_vod=False,
+ include_channel_id=False,
+ channel_id_tag="channel-id",
+ proxy_url=None
+):
+ """
+ Generate an M3U playlist from Xtream API data
+
+ Args:
+ url: Xtream API base URL
+ username: Xtream API username
+ password: Xtream API password
+ server_url: Server URL for streaming
+ categories: List of categories
+ streams: List of streams
+ wanted_groups: List of group patterns to include (optional)
+ unwanted_groups: List of group patterns to exclude (optional)
+ no_stream_proxy: Whether to disable stream proxying
+ include_vod: Whether VOD content is included
+ include_channel_id: Whether to include channel IDs
+ channel_id_tag: Tag name for channel IDs
+ proxy_url: Proxy URL for images and streams
+
+ Returns:
+ M3U playlist string
+ """
+ # Create category name lookup
+ category_names = {cat["category_id"]: cat["category_name"] for cat in categories}
+
+ # Log all available groups
+ all_groups = set(category_names.values())
+ logger.info(f"All available groups: {sorted(all_groups)}")
+
+ # Generate M3U playlist
+ m3u_playlist = "#EXTM3U\n"
+
+ # Track included groups
+ included_groups = set()
+ processed_streams = 0
+ total_streams = len(streams)
+
+ # Pre-compile filter patterns for massive filter lists (performance optimization)
+ wanted_patterns = [pattern.lower() for pattern in wanted_groups] if wanted_groups else []
+ unwanted_patterns = [pattern.lower() for pattern in unwanted_groups] if unwanted_groups else []
+
+ logger.info(f"🔍 Starting to filter {total_streams} streams...")
+ batch_size = 10000 # Process streams in batches for better performance
+
+ # Filter series to fetch episodes for (optimization to avoid fetching episodes for excluded series)
+ series_episodes_map = {}
+ if include_vod:
+ series_streams = [s for s in streams if s.get("content_type") == "series"]
+ if series_streams:
+ logger.info(f"Found {len(series_streams)} series. Filtering to determine which need episodes...")
+
+ series_to_fetch = []
+ for stream in series_streams:
+ # Get raw category name for filtering
+ category_name = category_names.get(stream.get('category_id'), 'Uncategorized')
+
+ # Calculate group_title (prefixed)
+ group_title = f"Series - {category_name}"
+
+ # Check filter against both raw category name and prefixed name
+ # This ensures we match "Action" (raw) AND "Series - Action" (prefixed)
+ should_fetch = True
+ if wanted_patterns:
+ should_fetch = any(
+ group_matches(category_name, w) or group_matches(group_title, w)
+ for w in wanted_groups
+ )
+ elif unwanted_patterns:
+ should_fetch = not any(
+ group_matches(category_name, u) or group_matches(group_title, u)
+ for u in unwanted_groups
+ )
+
+ if should_fetch:
+ series_to_fetch.append(stream)
+
+ if series_to_fetch:
+ logger.info(f"Fetching episodes for {len(series_to_fetch)} series (this might take a while)...")
+
+ with ThreadPoolExecutor(max_workers=5) as executor:
+ future_to_series = {
+ executor.submit(fetch_series_episodes, url, username, password, s.get("series_id")): s.get("series_id")
+ for s in series_to_fetch
+ }
+
+ completed_fetches = 0
+ for future in as_completed(future_to_series):
+ s_id, episodes = future.result()
+ if episodes:
+ series_episodes_map[s_id] = episodes
+
+ completed_fetches += 1
+ if completed_fetches % 50 == 0:
+ logger.info(f" Fetched episodes for {completed_fetches}/{len(series_to_fetch)} series")
+
+ logger.info(f"✅ Fetched episodes for {len(series_episodes_map)} series")
+
+ for stream in streams:
+ content_type = stream.get("content_type", "live")
+
+ # Get raw category name
+ category_name = category_names.get(stream.get("category_id"), "Uncategorized")
+
+ # Determine group title based on content type
+ if content_type == "series":
+ # For series, use series name as group title
+ group_title = f"Series - {category_name}"
+ stream_name = stream.get("name", "Unknown Series")
+ else:
+ # For live and VOD content
+ group_title = category_name
+ stream_name = stream.get("name", "Unknown")
+
+ # Add content type prefix for VOD
+ if content_type == "vod":
+ group_title = f"VOD - {category_name}"
+
+ # Optimized filtering logic using pre-compiled patterns
+ include_stream = True
+
+ if wanted_patterns:
+ # Only include streams from specified groups (optimized matching)
+ # Check both raw category name and final group title to support flexible filtering
+ include_stream = any(
+ group_matches(category_name, wanted_group) or group_matches(group_title, wanted_group)
+ for wanted_group in wanted_groups
+ )
+ elif unwanted_patterns:
+ # Exclude streams from unwanted groups (optimized matching)
+ include_stream = not any(
+ group_matches(category_name, unwanted_group) or group_matches(group_title, unwanted_group)
+ for unwanted_group in unwanted_groups
+ )
+
+ processed_streams += 1
+
+ # Progress logging for large datasets
+ if processed_streams % batch_size == 0:
+ logger.info(f" 📊 Processed {processed_streams}/{total_streams} streams ({(processed_streams/total_streams)*100:.1f}%)")
+
+ if include_stream:
+ included_groups.add(group_title)
+
+ tags = [
+ f'tvg-name="{stream_name}"',
+ f'group-title="{group_title}"',
+ ]
+
+ # Handle logo URL - proxy only if stream proxying is enabled
+ original_logo = stream.get("stream_icon", "")
+ if original_logo and not no_stream_proxy:
+ logo_url = f"{proxy_url}/image-proxy/{encode_url(original_logo)}"
+ else:
+ logo_url = original_logo
+ tags.append(f'tvg-logo="{logo_url}"')
+
+ # Handle channel id if enabled
+ if include_channel_id:
+ channel_id = stream.get("epg_channel_id")
+ if channel_id:
+ tags.append(f'{channel_id_tag}="{channel_id}"')
+
+ # Create the stream URL based on content type
+ if content_type == "live":
+ # Live TV streams
+ stream_url = f"{server_url}/live/{username}/{password}/{stream['stream_id']}.ts"
+ elif content_type == "vod":
+ # VOD streams
+ stream_url = f"{server_url}/movie/{username}/{password}/{stream['stream_id']}.{stream.get('container_extension', 'mp4')}"
+ elif content_type == "series":
+ # Series streams - check if we have episodes
+ episodes_data = series_episodes_map.get(stream.get("series_id"))
+
+ if episodes_data:
+ # Sort seasons numerically if possible
+ try:
+ sorted_seasons = sorted(episodes_data.items(), key=lambda x: int(x[0]) if str(x[0]).isdigit() else 999)
+                    except (ValueError, TypeError):
+ sorted_seasons = episodes_data.items()
+
+ for season_num, episodes in sorted_seasons:
+ for episode in episodes:
+ episode_id = episode.get("id")
+ episode_num = episode.get("episode_num")
+ episode_title = episode.get("title")
+ container_ext = episode.get("container_extension", "mp4")
+
+ # Format title: Series Name - S01E01 - Episode Title
+ full_title = f"{stream_name} - S{str(season_num).zfill(2)}E{str(episode_num).zfill(2)} - {episode_title}"
+
+ # Build stream URL for episode
+ ep_stream_url = f"{server_url}/series/{username}/{password}/{episode_id}.{container_ext}"
+
+ # Apply stream proxying if enabled
+ if not no_stream_proxy:
+ ep_stream_url = f"{proxy_url}/stream-proxy/{encode_url(ep_stream_url)}"
+
+ # Add to playlist
+ m3u_playlist += (
+ f'#EXTINF:0 {" ".join(tags)},{full_title}\n'
+ )
+ m3u_playlist += f"{ep_stream_url}\n"
+
+ # Continue to next stream as we've added all episodes
+ continue
+ else:
+ # Fallback for series without episode data
+ series_id = stream.get("series_id", stream.get("stream_id", ""))
+ stream_url = f"{server_url}/series/{username}/{password}/{series_id}.mp4"
+
+ # Apply stream proxying if enabled (for non-series, or series fallback)
+ if not no_stream_proxy:
+ stream_url = f"{proxy_url}/stream-proxy/{encode_url(stream_url)}"
+
+ # Add stream to playlist
+ m3u_playlist += (
+ f'#EXTINF:0 {" ".join(tags)},{stream_name}\n'
+ )
+ m3u_playlist += f"{stream_url}\n"
+
+ # Log included groups after filtering
+ logger.info(f"Groups included after filtering: {sorted(included_groups)}")
+ logger.info(f"Groups excluded after filtering: {sorted(all_groups - included_groups)}")
+ logger.info(f"✅ M3U generation complete! Generated playlist with {len(included_groups)} groups")
+
+ return m3u_playlist
diff --git a/app/services/xtream_api.py b/app/services/xtream_api.py
new file mode 100644
index 0000000..a4abf11
--- /dev/null
+++ b/app/services/xtream_api.py
@@ -0,0 +1,281 @@
+"""Xtream Codes API client service"""
+import json
+import logging
+import time
+import urllib.parse
+from concurrent.futures import ThreadPoolExecutor, as_completed
+
+import requests
+from fake_useragent import UserAgent
+from flask import request
+
+logger = logging.getLogger(__name__)
+
+
+def fetch_api_data(url, timeout=10):
+ """Make a request to an API endpoint"""
+ ua = UserAgent()
+ headers = {
+ "User-Agent": ua.chrome,
+ "Accept": "application/json,text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8",
+ "Accept-Language": "en-US,en;q=0.5",
+ "Connection": "close",
+ "Accept-Encoding": "gzip, deflate",
+ }
+
+ try:
+ hostname = urllib.parse.urlparse(url).netloc.split(":")[0]
+ logger.debug(f"Making request to host: {hostname}")
+
+ # Use fresh connection for each request to avoid stale connection issues
+ response = requests.get(url, headers=headers, timeout=timeout, stream=True)
+ response.raise_for_status()
+
+ # Parse the body as JSON; fall back to raw text for non-JSON responses
+ try:
+ # Check content length so unusually large payloads are visible in the logs
+ content_length = response.headers.get('Content-Length')
+ if content_length and int(content_length) > 10_000_000: # > 10MB
+ logger.info(f"Large response detected ({content_length} bytes)")
+
+ response.encoding = 'utf-8' # Ensure proper text decoding
+ return response.json()
+ except json.JSONDecodeError:
+ # Fallback to text for non-JSON responses
+ return response.text
+
+ except requests.exceptions.SSLError:
+ return {"error": "SSL Error", "details": "Failed to verify SSL certificate"}, 503
+ except requests.exceptions.RequestException as e:
+ logger.error(f"RequestException: {e}")
+ return {"error": "Request Exception", "details": str(e)}, 503
+
+
+def validate_xtream_credentials(url, username, password):
+ """Validate the Xtream API credentials"""
+ api_url = f"{url}/player_api.php?username={username}&password={password}"
+ data = fetch_api_data(api_url)
+
+ if isinstance(data, tuple): # Error response
+ return None, data[0], data[1]
+
+ if "user_info" not in data or "server_info" not in data:
+ return (
+ None,
+ json.dumps(
+ {
+ "error": "Invalid Response",
+ "details": "Server response missing required data (user_info or server_info)",
+ }
+ ),
+ 400,
+ )
+
+ return data, None, None
+
+
+def fetch_api_endpoint(url_info):
+ """Fetch a single API endpoint - used for concurrent requests"""
+ url, name, timeout = url_info
+ try:
+ logger.info(f"🚀 Fetching {name}...")
+ start_time = time.time()
+ data = fetch_api_data(url, timeout=timeout)
+ end_time = time.time()
+
+ if isinstance(data, list):
+ logger.info(f"✅ Completed {name} in {end_time-start_time:.1f}s - got {len(data)} items")
+ else:
+ logger.info(f"✅ Completed {name} in {end_time-start_time:.1f}s")
+ return name, data
+ except Exception as e:
+ logger.warning(f"❌ Failed to fetch {name}: {e}")
+ return name, None
+
+
+def fetch_series_episodes(url, username, password, series_id):
+ """Fetch episodes for a specific series"""
+ api_url = f"{url}/player_api.php?username={username}&password={password}&action=get_series_info&series_id={series_id}"
+ start_time = time.time()
+ try:
+ # Use a shorter timeout for individual series as we might fetch many
+ data = fetch_api_data(api_url, timeout=20)
+
+ # Check if we got a valid response with episodes
+ # The API returns 'episodes' as a dict {season_num: [episodes]}
+ if isinstance(data, dict) and "episodes" in data and data["episodes"]:
+ logger.info(f"✅ Fetched episodes for series {series_id} in {time.time() - start_time:.1f}s")
+ return series_id, data["episodes"]
+ else:
+ logger.warning(f"No episodes found for series {series_id}")
+ return series_id, None
+ except Exception as e:
+ logger.error(f"Failed to fetch episodes for series {series_id} in {time.time() - start_time:.1f}s: {e}")
+ return series_id, None
+
+
+def fetch_categories_and_channels(url, username, password, include_vod=False):
+ """Fetch categories and channels from the Xtream API using concurrent requests"""
+ all_categories = []
+ all_streams = []
+
+ try:
+ # Prepare all API endpoints to fetch concurrently
+ api_endpoints = [
+ (f"{url}/player_api.php?username={username}&password={password}&action=get_live_categories",
+ "live_categories", 60),
+ (f"{url}/player_api.php?username={username}&password={password}&action=get_live_streams",
+ "live_streams", 180),
+ ]
+
+ # Add VOD endpoints if requested (WARNING: This will be much slower!)
+ if include_vod:
+ logger.warning("⚠️ Including VOD content - this will take significantly longer!")
+ logger.info("💡 For faster loading, use the API without include_vod=true")
+
+ # Only add the most essential VOD endpoints - skip the massive streams for categories-only requests
+ api_endpoints.extend([
+ (f"{url}/player_api.php?username={username}&password={password}&action=get_vod_categories",
+ "vod_categories", 60),
+ (f"{url}/player_api.php?username={username}&password={password}&action=get_series_categories",
+ "series_categories", 60),
+ ])
+
+ # Only fetch the massive stream lists when serving the M3U endpoint;
+ # note this reads the Flask request context, so it must run inside a request
+ vod_for_m3u = request.endpoint == 'api.generate_m3u'
+ if vod_for_m3u:
+ logger.warning("🐌 Fetching massive VOD/Series streams for M3U generation...")
+ api_endpoints.extend([
+ (f"{url}/player_api.php?username={username}&password={password}&action=get_vod_streams",
+ "vod_streams", 240),
+ (f"{url}/player_api.php?username={username}&password={password}&action=get_series",
+ "series", 240),
+ ])
+ else:
+ logger.info("⚡ Skipping massive VOD streams for categories-only request")
+
+ # Fetch all endpoints concurrently using ThreadPoolExecutor
+ logger.info(f"Starting concurrent fetch of {len(api_endpoints)} API endpoints...")
+ results = {}
+
+ with ThreadPoolExecutor(max_workers=10) as executor: # Increased workers for better concurrency
+ # Submit all API calls
+ future_to_name = {executor.submit(fetch_api_endpoint, endpoint): endpoint[1]
+ for endpoint in api_endpoints}
+
+ # Collect results as they complete
+ for future in as_completed(future_to_name):
+ name, data = future.result()
+ results[name] = data
+
+ logger.info("All concurrent API calls completed!")
+
+ # Process live categories and streams (required)
+ live_categories = results.get("live_categories")
+ live_streams = results.get("live_streams")
+
+ if isinstance(live_categories, tuple): # Error response
+ return None, None, live_categories[0], live_categories[1]
+ if isinstance(live_streams, tuple): # Error response
+ return None, None, live_streams[0], live_streams[1]
+
+ if not isinstance(live_categories, list) or not isinstance(live_streams, list):
+ return (
+ None,
+ None,
+ json.dumps(
+ {
+ "error": "Invalid Data Format",
+ "details": "Live categories or streams data is not in the expected format",
+ }
+ ),
+ 500,
+ )
+
+ # Optimized data processing - batch operations for massive datasets
+ logger.info("Processing live content...")
+
+ # Batch set content_type for live content
+ if live_categories:
+ for category in live_categories:
+ category["content_type"] = "live"
+ all_categories.extend(live_categories)
+
+ if live_streams:
+ for stream in live_streams:
+ stream["content_type"] = "live"
+ all_streams.extend(live_streams)
+
+ logger.info(f"✅ Added {len(live_categories)} live categories and {len(live_streams)} live streams")
+
+ # Process VOD content if requested and available
+ if include_vod:
+ logger.info("Processing VOD content...")
+
+ # Process VOD categories
+ vod_categories = results.get("vod_categories")
+ if isinstance(vod_categories, list) and vod_categories:
+ for category in vod_categories:
+ category["content_type"] = "vod"
+ all_categories.extend(vod_categories)
+ logger.info(f"✅ Added {len(vod_categories)} VOD categories")
+
+ # Process series categories first (lightweight)
+ series_categories = results.get("series_categories")
+ if isinstance(series_categories, list) and series_categories:
+ for category in series_categories:
+ category["content_type"] = "series"
+ all_categories.extend(series_categories)
+ logger.info(f"✅ Added {len(series_categories)} series categories")
+
+ # Only process massive stream lists if they were actually fetched
+ vod_streams = results.get("vod_streams")
+ if isinstance(vod_streams, list) and vod_streams:
+ logger.info(f"🔥 Processing {len(vod_streams)} VOD streams (this is the slow part)...")
+
+ # Batch process for better performance
+ batch_size = 5000
+ for i in range(0, len(vod_streams), batch_size):
+ batch = vod_streams[i:i + batch_size]
+ for stream in batch:
+ stream["content_type"] = "vod"
+ if i + batch_size < len(vod_streams):
+ logger.info(f" Processed {i + batch_size}/{len(vod_streams)} VOD streams...")
+
+ all_streams.extend(vod_streams)
+ logger.info(f"✅ Added {len(vod_streams)} VOD streams")
+
+ # Process series (this can also be huge!)
+ series = results.get("series")
+ if isinstance(series, list) and series:
+ logger.info(f"🔥 Processing {len(series)} series (this is also slow)...")
+
+ # Batch process for better performance
+ batch_size = 5000
+ for i in range(0, len(series), batch_size):
+ batch = series[i:i + batch_size]
+ for show in batch:
+ show["content_type"] = "series"
+ if i + batch_size < len(series):
+ logger.info(f" Processed {i + batch_size}/{len(series)} series...")
+
+ all_streams.extend(series)
+ logger.info(f"✅ Added {len(series)} series")
+
+ except Exception as e:
+ logger.error(f"Critical error fetching API data: {e}")
+ return (
+ None,
+ None,
+ json.dumps(
+ {
+ "error": "API Fetch Error",
+ "details": f"Failed to fetch data from IPTV service: {str(e)}",
+ }
+ ),
+ 500,
+ )
+
+ logger.info(f"🚀 CONCURRENT FETCH COMPLETE: {len(all_categories)} total categories and {len(all_streams)} total streams")
+ return all_categories, all_streams, None, None
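The concurrent-fetch pattern in `fetch_categories_and_channels` — submit every endpoint to a `ThreadPoolExecutor`, then collect results with `as_completed` — can be sketched without any network I/O. The `fake_fetch` stub below is a stand-in for `fetch_api_endpoint`:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed


def fake_fetch(endpoint):
    # Stand-in for fetch_api_endpoint: returns (name, data) with no network I/O
    name, item_count = endpoint
    return name, [f"{name}-item-{i}" for i in range(item_count)]


endpoints = [("live_categories", 3), ("live_streams", 5), ("vod_categories", 2)]
results = {}
with ThreadPoolExecutor(max_workers=10) as executor:
    # Submit all "API calls" at once, then gather them as they finish
    futures = {executor.submit(fake_fetch, ep): ep[0] for ep in endpoints}
    for future in as_completed(futures):
        name, data = future.result()
        results[name] = data

print(sorted(results))  # → ['live_categories', 'live_streams', 'vod_categories']
```

Completion order is nondeterministic, which is why the service indexes results by endpoint name instead of relying on submission order.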
diff --git a/app/utils/__init__.py b/app/utils/__init__.py
new file mode 100644
index 0000000..09d47b4
--- /dev/null
+++ b/app/utils/__init__.py
@@ -0,0 +1,12 @@
+"""Utility functions package"""
+from .helpers import encode_url, group_matches, parse_group_list, setup_custom_dns
+from .streaming import generate_streaming_response, stream_request
+
+__all__ = [
+ 'setup_custom_dns',
+ 'encode_url',
+ 'parse_group_list',
+ 'group_matches',
+ 'stream_request',
+ 'generate_streaming_response'
+]
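The package re-exports the helpers below; `parse_group_list` and `encode_url` in particular drive the filtering query parameters and the proxy URLs. A quick self-contained sketch of their behavior (reimplemented inline for illustration, matching the definitions in `app/utils/helpers.py`):

```python
import urllib.parse


def encode_url(url):
    # Percent-encode everything, including "/" and ":", as in helpers.py
    return urllib.parse.quote(url, safe="") if url else ""


def parse_group_list(group_string):
    # Comma-separated string -> list of trimmed names
    return [g.strip() for g in group_string.split(",")] if group_string else []


print(parse_group_list("Sports, News ,Kids"))  # → ['Sports', 'News', 'Kids']
print(encode_url("http://host/live/u/p/1.ts"))
# → http%3A%2F%2Fhost%2Flive%2Fu%2Fp%2F1.ts
```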
diff --git a/app/utils/helpers.py b/app/utils/helpers.py
new file mode 100644
index 0000000..f12de3e
--- /dev/null
+++ b/app/utils/helpers.py
@@ -0,0 +1,93 @@
+"""Utility functions for URL encoding, filtering, and DNS setup"""
+import fnmatch
+import ipaddress
+import logging
+import socket
+import urllib.parse
+
+import dns.resolver
+
+logger = logging.getLogger(__name__)
+
+
+def setup_custom_dns():
+ """Configure a custom DNS resolver using reliable DNS services"""
+ dns_servers = ["1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4", "9.9.9.9"]
+
+ custom_resolver = dns.resolver.Resolver()
+ custom_resolver.nameservers = dns_servers
+
+ original_getaddrinfo = socket.getaddrinfo
+
+ def new_getaddrinfo(host, port, family=0, type=0, proto=0, flags=0):
+ if host:
+ try:
+ # Skip DNS resolution for IP addresses
+ try:
+ ipaddress.ip_address(host)
+ # If we get here, the host is already an IP address
+ logger.debug(f"Host is already an IP address: {host}, skipping DNS resolution")
+ except ValueError:
+ # Not an IP address, so try system DNS first
+ try:
+ result = original_getaddrinfo(host, port, family, type, proto, flags)
+ logger.debug(f"System DNS resolved {host}")
+ return result
+ except Exception as system_error:
+ logger.info(f"System DNS resolution failed for {host}: {system_error}, falling back to custom DNS")
+ # Fall back to custom DNS
+ answers = custom_resolver.resolve(host)
+ resolved_ip = str(answers[0])
+ logger.debug(f"Custom DNS resolved {host} -> {resolved_ip}")
+ host = resolved_ip
+ except Exception as e:
+ logger.info(f"Custom DNS resolution also failed for {host}: {e}, using original getaddrinfo")
+ return original_getaddrinfo(host, port, family, type, proto, flags)
+
+ socket.getaddrinfo = new_getaddrinfo
+ logger.info("Custom DNS resolver set up")
+
+
+def encode_url(url):
+ """Safely encode a URL for use in proxy endpoints"""
+ return urllib.parse.quote(url, safe="") if url else ""
+
+
+def parse_group_list(group_string):
+ """Parse a comma-separated string into a list of trimmed strings"""
+ return [group.strip() for group in group_string.split(",")] if group_string else []
+
+
+def group_matches(group_title, pattern):
+ """Check if a group title matches a pattern, supporting wildcards and exact matching"""
+ # Convert to lowercase for case-insensitive matching
+ group_lower = group_title.lower()
+ pattern_lower = pattern.lower()
+
+ # Handle spaces in pattern
+ if " " in pattern_lower:
+ # For patterns with spaces, split and check each part
+ pattern_parts = pattern_lower.split()
+ group_parts = group_lower.split()
+
+ # If pattern has more parts than group, can't match
+ if len(pattern_parts) > len(group_parts):
+ return False
+
+ # Check each part of the pattern against group parts
+ for i, part in enumerate(pattern_parts):
+ if i >= len(group_parts):
+ return False
+ if "*" in part or "?" in part:
+ if not fnmatch.fnmatch(group_parts[i], part):
+ return False
+ else:
+ if part not in group_parts[i]:
+ return False
+ return True
+
+ # Check for wildcard patterns
+ if "*" in pattern_lower or "?" in pattern_lower:
+ return fnmatch.fnmatch(group_lower, pattern_lower)
+ else:
+ # Simple substring match for non-wildcard patterns
+ return pattern_lower in group_lower
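The non-wildcard and wildcard branches of `group_matches` boil down to case-insensitive substring matching versus `fnmatch`. A small sketch mirroring the two simple (space-free) cases:

```python
import fnmatch


def matches(group_title, pattern):
    # Mirrors group_matches for patterns without spaces:
    # wildcard patterns use fnmatch, plain patterns use substring matching
    g, p = group_title.lower(), pattern.lower()
    if "*" in p or "?" in p:
        return fnmatch.fnmatch(g, p)
    return p in g


print(matches("UK | Sports", "sport"))      # substring, case-insensitive → True
print(matches("UK | Sports", "uk*sports"))  # wildcard spans the separator → True
print(matches("US | News", "uk*"))          # → False
```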
diff --git a/app/utils/streaming.py b/app/utils/streaming.py
new file mode 100644
index 0000000..ae971c5
--- /dev/null
+++ b/app/utils/streaming.py
@@ -0,0 +1,65 @@
+"""Streaming and proxy utilities"""
+import logging
+
+import requests
+from flask import Response
+
+logger = logging.getLogger(__name__)
+
+
+def stream_request(url, headers=None, timeout=30):
+ """Make a streaming request that doesn't buffer the full response"""
+ if not headers:
+ headers = {
+ "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36",
+ "Connection": "keep-alive",
+ }
+
+ # Use longer timeout for streams and set both connect and read timeouts
+ return requests.get(url, stream=True, headers=headers, timeout=(10, timeout))
+
+
+def generate_streaming_response(response, content_type=None):
+ """Generate a streaming response with appropriate headers"""
+ if not content_type:
+ content_type = response.headers.get("Content-Type", "application/octet-stream")
+
+ def generate():
+ try:
+ bytes_sent = 0
+ for chunk in response.iter_content(chunk_size=8192):
+ if chunk:
+ bytes_sent += len(chunk)
+ yield chunk
+ logger.info(f"Stream completed, sent {bytes_sent} bytes")
+ except requests.exceptions.ChunkedEncodingError as e:
+ # Chunked encoding error from upstream - log and stop gracefully
+ logger.warning(f"Upstream chunked encoding error after {bytes_sent} bytes: {str(e)}")
+ # Don't raise - just stop yielding to close stream gracefully
+ except requests.exceptions.ConnectionError as e:
+ # Connection error (reset, timeout, etc.) - log and stop gracefully
+ logger.warning(f"Connection error after {bytes_sent} bytes: {str(e)}")
+ # Don't raise - just stop yielding to close stream gracefully
+ except Exception as e:
+ logger.error(f"Streaming error after {bytes_sent} bytes: {str(e)}")
+ # Don't raise exceptions in generators after headers are sent!
+ # Raising here causes Flask to inject "HTTP/1.1 500" into the chunked body,
+ # corrupting the stream the client is already receiving.
+ finally:
+ # Always close the upstream response to free resources
+ try:
+ response.close()
+ except Exception:
+ pass
+
+ headers = {
+ "Access-Control-Allow-Origin": "*",
+ "Content-Type": content_type,
+ }
+
+ # Forward content length if available and the upstream isn't chunked
+ if "Content-Length" in response.headers and "Transfer-Encoding" not in response.headers:
+ headers["Content-Length"] = response.headers["Content-Length"]
+ # Otherwise leave transfer framing to the WSGI server: "Transfer-Encoding"
+ # is a hop-by-hop header and must not be set by the application (PEP 3333)
+
+ return Response(generate(), mimetype=content_type, headers=headers, direct_passthrough=True)
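The key property of `generate()` above is that mid-stream upstream failures stop the generator quietly while `finally` still releases the upstream response. A framework-free sketch of that pattern (names are illustrative):

```python
def stream_chunks(chunks, close_hook):
    """Yield chunks; swallow mid-stream errors and always run cleanup."""
    bytes_sent = 0
    try:
        for chunk in chunks:
            if isinstance(chunk, Exception):
                raise chunk  # simulate an upstream failure mid-stream
            bytes_sent += len(chunk)
            yield chunk
    except Exception:
        pass  # never raise once streaming has begun; just stop yielding
    finally:
        close_hook()  # mirrors response.close() in the diff


closed = []
out = list(stream_chunks(
    [b"abc", b"de", ConnectionError("reset by upstream"), b"never sent"],
    lambda: closed.append(True),
))
print(out, closed)  # → [b'abc', b'de'] [True]
```

The client receives a truncated but well-formed body, which most players handle far better than a 500 status injected mid-stream.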
diff --git a/frontend/index.html b/frontend/index.html
index a64fc1c..57e0fe6 100644
--- a/frontend/index.html
+++ b/frontend/index.html
@@ -4,168 +4,163 @@
- xtream2m3u - M3U Playlist Generator
+ xtream2m3u - Playlist Generator
  xtream2m3u
- Convert Xtream IPTV APIs into customizable M3U playlists
+ Generate custom M3U playlists from your Xtream IPTV subscription.
- 🔐 Xtream API Credentials
+ 🔐 Service Credentials
  🔒
- Privacy Notice: Your credentials are only used to connect to your IPTV
- service and are never saved or stored on our servers.
- Loading categories...
+ Connecting to service...
- 📁 Select Categories
+ 📁 Customize Playlist
+ 🔍
- Click categories to select them (or leave empty to include all)
+ Select categories to include in your playlist
  ✓
- Playlist Generated!
- Your M3U playlist has been successfully created and is ready for
- download.
+ Playlist Ready!
+ Your custom M3U playlist has been generated successfully.