Generating and Hosting Custom Traffic-Splitting Rule Scripts

Preface#

Key technologies: GitHub + Python
Project repository

1. Install Git (skip if already installed)#

Git official site

1.1. Configure a GitHub SSH key pair#

① After installing Git, run the following to generate a key pair.
The email must match the one configured in Git later.

ssh-keygen -t rsa -C "your-email"

② View the public key and add it to GitHub
The key pair is generated under: C:\Users\<username>\.ssh
id_rsa: private key
id_rsa.pub: public key
View the public key > open GitHub > Settings > SSH and GPG keys > New SSH key > enter a name > paste the public-key contents > Add SSH key

③ Verify the configuration

ssh -T git@github.com

1.5. Create a GitHub repository and push the modified project to it#

First-time push to GitHub:

git config --global user.name "your-username"
git config --global user.email "your-email"
git init
git add .
git commit -m "Initial commit"
git remote add origin git@github.com:Ctory-Nily/rule-script.git
git push -u origin main

Pushing after updating the local repository:

git add .
git commit -m "Update"
git push -u origin main

First-time pull of the GitHub repository:

git clone git@github.com:Ctory-Nily/rule-script.git
npm install

Pulling the latest repository contents afterwards:

git pull origin main

2. Configure CI/CD#

① Generate a Personal Access Token
Click your avatar (top right) > Settings > Developer settings > Personal access tokens > Generate new token

Check the following scopes:
repo (full control of repositories)
workflow (update GitHub Actions workflows)

Click Generate token and copy the token it produces

② Add the token to the repository's Secrets
Open your GitHub repository > Settings > Secrets and variables > Actions > New repository secret

Enter the following:
Name: PUSH_EVERYDAY (or any name you prefer)
Value: paste the Personal Access Token generated above

Click Add secret

③ Grant workflows write access: open your GitHub repository > Settings > Actions > General > Workflow permissions > select Read and write permissions

④ Create a .github/workflows folder in your project root
Create process-json-list-fetch-and-convert.yml (for reference only)

name: Process JSON and List Fetch and Convert Files

on:
  # Runs daily at 20:00 UTC (04:00 Beijing time)
  schedule:
    - cron: '0 20 * * *'
  # Manual trigger
  workflow_dispatch:
  push:
    paths:
      # Trigger when .json files under script/ change
      - 'script/**/*.json'
      # Trigger when anything under user_rule/ changes
      - 'user_rule/**'

env:
  TZ: Asia/Shanghai

jobs:
  # 1. Process JSON files
  process_json:
    runs-on: ubuntu-latest

    # Expose has_changes as a job output
    outputs:
      has_changes: ${{ steps.check-changes.outputs.has_changes }} 

    steps:
      - name: Checkout code
        uses: actions/checkout@v3
        with:
          fetch-depth: 0  # Fetch the full history

      - name: Check for changes in script directory
        id: check-changes
        run: |
          # Check whether .json files under script/ have changed
          if git diff --quiet HEAD~1 HEAD -- script/*.json; then
            echo "No changes in script directory."
            echo "has_changes=false" >> $GITHUB_OUTPUT
          else
            echo "Changes detected in script directory."
            echo "has_changes=true" >> $GITHUB_OUTPUT
          fi

      - name: Install jq
        if: steps.check-changes.outputs.has_changes == 'true'
        run: sudo apt-get install -y jq

      - name: Process JSON files
        if: steps.check-changes.outputs.has_changes == 'true'
        run: |
          # Iterate over all .json files under script/
          for file in script/*.json; do
            echo "Processing file: $file"

            # Use jq to rewrite the URLs in rules_urls
            jq '.[].rules_urls |= map(gsub("/refs/heads/"; "/"))' "$file" > "$file.tmp"
            mv "$file.tmp" "$file"

            echo "Updated file: $file"
          done

      - name: Commit and push changes
        if: steps.check-changes.outputs.has_changes == 'true'
        env:
          GH_TOKEN: ${{ secrets.PUSH_EVERYDAY }}
        run: |
          git config --local user.email "actions@github.com"
          git config --local user.name "GitHub Actions"
          git add .
          if git diff-index --quiet HEAD; then
            echo "Nothing to replace; skipping commit."
          else
            git commit -m "Automatically remove refs/heads/ from URLs in JSON files"
            git pull origin main --rebase
            git remote set-url origin https://x-access-token:$GH_TOKEN@github.com/$GITHUB_REPOSITORY.git
            if git push origin main; then
              echo "Push succeeded."
            else
              echo "Push failed; check the remote branch for conflicts."
              exit 1
            fi
          fi

  # 2. Process list files
  process_list:
    runs-on: ubuntu-latest

    # Expose has_changes as a job output
    outputs:
      has_changes: ${{ steps.check-changes.outputs.has_changes }} 

    needs: process_json
    if: ${{ always() }}

    steps:
      - name: Checkout code
        uses: actions/checkout@v3
        with:
          fetch-depth: 0  # Fetch the full history

      - name: Check for changes in user_rule directory
        id: check-changes
        run: |
          # Check whether the user_rule directory has changed
          if git diff --quiet HEAD~1 HEAD -- user_rule; then
            echo "No changes in user_rule directory."
            echo "has_changes=false" >> $GITHUB_OUTPUT
          else
            echo "Changes detected in user_rule directory."
            echo "has_changes=true" >> $GITHUB_OUTPUT
          fi

      - name: Set up Python
        if: steps.check-changes.outputs.has_changes == 'true'
        uses: actions/setup-python@v4
        with:
          python-version: '3.9'

      # Run the process_rules.py script
      - name: Run Process Rule Python script
        if: steps.check-changes.outputs.has_changes == 'true'
        run: |
          python3 script/process_rules.py

      - name: Commit processed rule changes
        if: steps.check-changes.outputs.has_changes == 'true'
        env:
          GH_TOKEN: ${{ secrets.PUSH_EVERYDAY }}
        run: |
          git config --local user.email "actions@github.com"
          git config --local user.name "GitHub Actions"
          git add .
          if git diff-index --quiet HEAD; then
            echo "No file changes; skipping commit."
          else
            git commit -m "Processed rule files with total rules count and sorted rules"
            git pull origin main --rebase
            git remote set-url origin https://x-access-token:$GH_TOKEN@github.com/$GITHUB_REPOSITORY.git
            if git push origin main; then
              echo "Push succeeded."
            else
              echo "Push failed; check the remote branch for conflicts."
              exit 1
            fi
          fi

  # 3. Fetch and convert files
  fetch_and_convert:
    runs-on: ubuntu-latest

    needs: process_list
    if: ${{ always() }}

    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.9'

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          python -m pip install pyyaml requests
      
      # Run the fetch_and_convert.py script
      - name: Run fetch and convert script
        run: |
          python script/fetch_and_convert.py

      # Commit and push
      - name: Commit and push changes
        env:
          GH_TOKEN: ${{ secrets.PUSH_EVERYDAY }}
        run: |
          git config --local user.email "actions@github.com"
          git config --local user.name "GitHub Actions"
          git add .
          if git diff-index --quiet HEAD; then
            echo "No file changes; skipping commit."
          else
            git commit -m "Auto-fetched and converted files to YAML"
            git pull origin main --rebase
            git remote set-url origin https://x-access-token:$GH_TOKEN@github.com/$GITHUB_REPOSITORY.git
            if git push origin main; then
              echo "Push succeeded."
            else
              echo "Push failed; check the remote branch for conflicts."
              exit 1
            fi
          fi
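
The jq step in the process_json job rewrites every URL in each entry's rules_urls, dropping the /refs/heads/ segment that newer GitHub raw links carry. A minimal Python sketch of the same transformation (the sample entry is made up):

```python
import json

# Hypothetical entry shaped like those in script/rule_file_list.json
data = [{
    "rule_name": "Example",
    "rules_urls": [
        "https://raw.githubusercontent.com/user/repo/refs/heads/main/Example.list"
    ],
    "cn_name": ""
}]

# Equivalent of: jq '.[].rules_urls |= map(gsub("/refs/heads/"; "/"))'
for item in data:
    item["rules_urls"] = [u.replace("/refs/heads/", "/") for u in item["rules_urls"]]

print(data[0]["rules_urls"][0])
# https://raw.githubusercontent.com/user/repo/main/Example.list
```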

⑤ Create script/process_rules.py (for reference only)

import os
import logging
from typing import List, Dict, Optional
from datetime import datetime
from zoneinfo import ZoneInfo

# Configure logging
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s - %(levelname)s - %(message)s",
)

# Sort priority for rule types
RULE_ORDER = [
    "DOMAIN",
    "DOMAIN-SUFFIX",
    "DOMAIN-KEYWORD",
    "IP-CIDR",
    "IP-CIDR6",
    "IP-SUFFIX",
]

def get_time() -> Optional[str]:
    """
    Get the time at which the workflow runs.
    :return: the current time as a formatted string
    """
    Beijing_Time = ZoneInfo('Asia/Shanghai')
    now_time = datetime.now(Beijing_Time)

    # Format as "year-month-day hour:minute"
    format_time = now_time.strftime("%Y年%m月%d日 %H:%M")

    return format_time

def calculate_rule_number(content: List[str]) -> Dict[str, int]:
    """
    Count the number of rules of each type.
    :param content: list of rule lines
    :return: dict mapping rule type to count
    """
    rule_number_dict = { keyword: 0 for keyword in RULE_ORDER }

    # Walk the lines and tally each type; match "TYPE," so that,
    # for example, IP-CIDR does not also count IP-CIDR6 lines
    for line in content:
        for keyword in RULE_ORDER:
            if line.startswith(f"{keyword},"):
                rule_number_dict[keyword] += 1

    return rule_number_dict

def is_list_file(file_path: str) -> bool:
    """
    Check whether a file is in list format (one entry per line).
    :param file_path: file path
    :return: True if the file is in list format, otherwise False
    """
    try:
        with open(file_path, "r", encoding="utf-8") as file:
            lines = file.readlines()
            return len(lines) > 0 and all(len(line.strip().split()) == 1 for line in lines if line.strip() and not line.startswith('#'))
    except Exception as e:
        logging.error(f"Failed to check file format: {file_path} - {e}")
        return False

def count_rule_lines(file_path: str, rulename: str) -> None:
    """
    Count each rule type in a rule file and rewrite the file with sorted rules.
    :param file_path: file path
    :param rulename: rule name
    """
    try:
        logging.info(f"Processing file: {file_path} - rule name: {rulename}")

        with open(file_path, "r", encoding="utf-8") as file:
            original_content = file.read().strip()

        # Drop comment lines
        lines = [line.strip() for line in original_content.splitlines() if line.strip() and not line.startswith("#")]

        if not lines:
            logging.info(f"File {file_path} has no usable content.")
            return

        # Deduplicate with a set
        unique_rules = set()
        sorted_rules = {prefix: [] for prefix in RULE_ORDER}

        for line in lines:
            parts = line.split(",")
            if len(parts) < 2:
                logging.warning(f"Skipping malformed line: {line}")
                continue

            rule_type = parts[0].strip()

            if rule_type in RULE_ORDER:
                if line not in unique_rules:
                    unique_rules.add(line)
                    sorted_rules[rule_type].append(line)
                else:
                    logging.debug(f"Ignoring duplicate rule: {line}")

        # Build the sorted rule list
        sorted_lines = []
        for prefix in RULE_ORDER:
            sorted_rules[prefix].sort()
            sorted_lines.extend(sorted_rules[prefix])

        rule_number_dict = calculate_rule_number(sorted_lines)
        total_rules = len(sorted_lines)

        # Current time
        now_time = get_time()

        # Build the header comment
        comment = f"# 规则名称: {rulename} \n"
        comment += f"# 规则总数量: {total_rules} \n"
        comment += f"# 更新时间: {now_time} \n"

        for prefix, count in rule_number_dict.items():
            comment += f"# {prefix}: {count} \n"
        sorted_lines.insert(0, comment)

        # Write the file back
        new_content = "\n".join(sorted_lines)
        with open(file_path, "w", encoding="utf-8") as file:
            file.write(new_content)

        # Report whether the content changed
        if original_content == new_content:
            logging.info(f"File content unchanged: {file_path}")
        else:
            logging.info(f"File content updated: {file_path}")

    except Exception as e:
        logging.error(f"Failed to process rule file: {file_path} - {e}")

def process_rule_folder(folder_path: str) -> None:
    """
    Process every rule file in the given folder.
    :param folder_path: folder path
    """
    try:
        for filename in os.listdir(folder_path):
            if filename.endswith('.list'):  # only handle .list files
                file_path = os.path.join(folder_path, filename)
                if os.path.isfile(file_path) and is_list_file(file_path):  # make sure it is a regular file
                    rulename = os.path.splitext(filename)[0]  # strip the extension to get the rule name
                    count_rule_lines(file_path, rulename)
                    logging.info(f"Processed file: {file_path}, rule name: {rulename}")
    except Exception as e:
        logging.error(f"Failed to process folder: {folder_path} - {e}")

if __name__ == "__main__":
    # Path of the user_rule folder
    folder_path = 'user_rule'
    os.makedirs(folder_path, exist_ok=True)

    process_rule_folder(folder_path)
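
In miniature, the deduplicate-and-sort pass in count_rule_lines behaves like this (the sample rules are made up):

```python
# Rule-type priority, mirroring RULE_ORDER in process_rules.py
RULE_ORDER = ["DOMAIN", "DOMAIN-SUFFIX", "DOMAIN-KEYWORD", "IP-CIDR", "IP-CIDR6", "IP-SUFFIX"]

lines = [
    "DOMAIN-SUFFIX,example.com",
    "DOMAIN,www.example.com",
    "DOMAIN-SUFFIX,example.com",      # exact duplicate: dropped
    "IP-CIDR,10.0.0.0/8,no-resolve",
]

seen = set()
buckets = {prefix: [] for prefix in RULE_ORDER}
for line in lines:
    rule_type = line.split(",")[0]
    if rule_type in RULE_ORDER and line not in seen:
        seen.add(line)
        buckets[rule_type].append(line)

# Concatenate the buckets in priority order, each sorted alphabetically
sorted_lines = [rule for prefix in RULE_ORDER for rule in sorted(buckets[prefix])]
print(sorted_lines)
# ['DOMAIN,www.example.com', 'DOMAIN-SUFFIX,example.com', 'IP-CIDR,10.0.0.0/8,no-resolve']
```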

⑥ Create script/rule_file_list.json (for reference only)

[
  {
    "rule_name": "GoogleDrive",
    "rules_urls": [
      "https://raw.githubusercontent.com/blackmatrix7/ios_rule_script/master/rule/Clash/GoogleDrive/GoogleDrive.list"
    ],
    "cn_name": ""
  },
  {
    "rule_name": "GoogleSearch",
    "rules_urls": [
      "https://raw.githubusercontent.com/blackmatrix7/ios_rule_script/master/rule/Clash/GoogleSearch/GoogleSearch.list"
    ],
    "cn_name": "谷歌搜索"
  },
  {
    "rule_name": "Microsoft",
    "rules_urls": [
      "https://raw.githubusercontent.com/Repcz/Tool/X/Clash/Rules/Microsoft.list",
      "https://raw.githubusercontent.com/blackmatrix7/ios_rule_script/master/rule/Clash/Microsoft/Microsoft.list"
    ],
    "cn_name": "微软"
  }
]
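
Keep this file strictly valid JSON: json.load rejects trailing commas, which are easy to introduce when appending entries. A quick check (the snippets are hypothetical):

```python
import json

# A trailing comma after the last element is not valid JSON
bad = '[{"rule_name": "Example", "rules_urls": [], "cn_name": ""},]'
try:
    json.loads(bad)
except json.JSONDecodeError as e:
    print(f"Invalid JSON: {e}")

# Without the trailing comma it parses fine
good = '[{"rule_name": "Example", "rules_urls": [], "cn_name": ""}]'
rules = json.loads(good)
print(rules[0]["rule_name"])
# Example
```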

⑦ Create script/fetch_and_convert.py (for reference only)

import os
import requests
import logging
import json
from typing import List, Dict, Optional, Union
from datetime import datetime
from zoneinfo import ZoneInfo

# Configure logging
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s - %(levelname)s - %(message)s",
)

# Sort priority for rule types
RULE_ORDER = [
    "DOMAIN",
    "DOMAIN-SUFFIX",
    "DOMAIN-KEYWORD",
    "IP-CIDR",
    "IP-CIDR6",
    "IP-SUFFIX",
    "IP-ASN",
    "PROCESS-NAME",
    "AND"
]

def get_time() -> Optional[str]:
    """
    Get the time at which the workflow runs.
    :return: the current time as a formatted string
    """
    Beijing_Time = ZoneInfo('Asia/Shanghai')
    now_time = datetime.now(Beijing_Time)

    # Format as "year-month-day hour:minute"
    format_time = now_time.strftime("%Y年%m月%d日 %H:%M")

    return format_time

def calculate_rule_number(content: List[str]) -> Dict[str, int]:
    """
    Count the number of rules of each type.
    :param content: list of rule lines
    :return: dict mapping rule type to count
    """
    rule_number_dict = { prefix: 0 for prefix in RULE_ORDER }

    # Walk the lines and tally each type
    for line in content:
        for prefix in RULE_ORDER:
            if line.startswith(f"{prefix},"):
                rule_number_dict[prefix] += 1

    return rule_number_dict

def download_file(file_url: str, retries: int = 3) -> Optional[str]:
    """
    Download a file.
    :param file_url: file URL
    :param retries: number of attempts
    :return: file content as a string, or None on failure
    """
    for attempt in range(retries):
        try:
            response = requests.get(file_url, timeout=10)
            response.raise_for_status()
            logging.info(f"Downloaded: {file_url}")
            return response.text
        except requests.RequestException as e:
            logging.warning(f"Download failed (attempt {attempt + 1}/{retries}): {file_url} - {e}")
            if attempt == retries - 1:
                logging.error(f"Download failed; retry limit reached: {file_url}")
                return None

def merge_file_contents(file_contents: List[str]) -> List[str]:
    """
    Merge file contents, dropping duplicates and comment lines.
    :param file_contents: list of file contents
    :return: merged list of lines
    """
    merged_lines = []
    seen_lines = set()

    for content in file_contents:
        if content:
            for line in content.splitlines():
                line = line.strip()
                if line and not line.startswith("#") and line not in seen_lines:  # skip duplicates and comments
                    seen_lines.add(line)
                    merged_lines.append(line)

    return merged_lines

def sort_rules(rules: List[str]) -> List[str]:
    """
    Sort rules by type priority, then alphabetically.
    :param rules: rule list
    :return: sorted rule list
    """
    def rule_key(line: str) -> tuple:
        parts = line.split(",")
        if parts[0] in RULE_ORDER and len(parts) > 1:
            return (RULE_ORDER.index(parts[0]), parts[1])  # by type priority, then value
        else:
            return (len(RULE_ORDER), line)  # unknown or malformed lines sort last
    
    return sorted(rules, key=rule_key)

def write_md_file(urls: List[str], rule_name: str, content: List[str], cn_name: str, folder_path: str) -> None:
    """
    Generate a README.md inside each rule folder.
    :param urls: source URL list
    :param rule_name: rule name
    :param content: rule lines
    :param cn_name: Chinese display name
    :param folder_path: output path
    """
    md_file_path = os.path.join(folder_path, "README.md")

    # Total rule count
    rule_count = len(content)

    # Per-type rule counts
    rule_number_dict = calculate_rule_number(content)

    # Current time
    now_time = get_time()

    # Build the Markdown content
    md_content = f"""# {cn_name if cn_name else rule_name}

## 前言
![](https://img.shields.io/badge/%E4%B8%8B%E8%BD%BD%E8%A7%84%E5%88%99-%E5%90%88%E5%B9%B6%E8%A7%84%E5%88%99-blue) ![](https://img.shields.io/badge/%E7%BB%9F%E8%AE%A1%E6%95%B0%E9%87%8F-green) ![](https://img.shields.io/badge/%E7%94%9F%E6%88%90%E8%AE%A2%E9%98%85-8A2BE2)

本文件由脚本自动生成

## 规则统计
最后同步时间: {now_time}

各类型规则统计:
| 类型        | 数量(条) |
| ----------- | -------- |
"""

    for prefix, count in rule_number_dict.items():
        md_content += f"| {prefix:<12} | {count:<8} | \n"
    md_content += f"| **TOTAL** | **{rule_count}** | \n"

    md_content += f"""## Clash

### 订阅链接 (每日更新)
https://raw.githubusercontent.com/Ctory-Nily/rule-script/main/rules/Clash/{rule_name}/{rule_name}.yaml

### 使用说明
{rule_name}.yaml, 请使用 behavior: 'classical'

## 规则来源
"""

    for url in urls:
        md_content += f"- {url} \n"

    try:
        with open(md_file_path, "w", encoding="utf-8") as md_file:
            md_file.write(md_content)
            logging.info(f"Saved .md file: {md_file_path}")
    except IOError as e:
        logging.error(f"Failed to save .md file {md_file_path} - {e}")

def write_list_file(rule_name: str, content: List[str], folder_path: str) -> None:
    """
    Write the rules to a .list file with a header comment.
    :param rule_name: rule name
    :param content: rule lines
    :param folder_path: output path
    """
    list_file_path = os.path.join(folder_path, f"{rule_name}.list")

    # Total rule count
    rule_count = len(content)
    
    rule_number_dict = calculate_rule_number(content)

    # Header comment
    formatted_content = [
        f"# 规则名称: {rule_name}",
        f"# 规则总数量: {rule_count}",
    ]

    for prefix, count in rule_number_dict.items():
        formatted_content.append(f"# {prefix}: {count}")

    # Append the rules
    formatted_content.append("")
    formatted_content.extend(content)

    try:
        with open(list_file_path, 'w', encoding='utf-8') as list_file:
            list_file.write("\n".join(formatted_content))
        logging.info(f"Saved .list file: {list_file_path}")
    except IOError as e:
        logging.error(f"Failed to save .list file {list_file_path} - {e}")

def write_yaml_file(rule_name: str, content: List[str], folder_path: str) -> None:
    """
    Write the rules to a .yaml file in Clash payload format.
    :param rule_name: rule name
    :param content: rule lines
    :param folder_path: output path
    """
    yaml_file_path = os.path.join(folder_path, f"{rule_name}.yaml")

    # Total rule count
    rule_count = len(content)
    
    rule_number_dict = calculate_rule_number(content)

    # Header comment
    formatted_content = [
        f"# 规则名称: {rule_name}",
        f"# 规则总数量: {rule_count}",
    ]

    for prefix, count in rule_number_dict.items():
        formatted_content.append(f"# {prefix}: {count}")

    # Payload section
    formatted_content.append("")
    formatted_content.append("payload:")
    for line in content:
        formatted_content.append(f"  - {line}")

    try:
        with open(yaml_file_path, 'w', encoding='utf-8') as yaml_file:
            yaml_file.write("\n".join(formatted_content))
        logging.info(f"Saved .yaml file: {yaml_file_path}")
    except IOError as e:
        logging.error(f"Failed to save .yaml file {yaml_file_path} - {e}")

def process_file(rule_name: str, urls: List[str], cn_name: str, folder_path: str) -> None:
    """
    Download the source files, merge and sort the rules, then emit .list and .yaml files.
    :param rule_name: rule name
    :param urls: source URL list
    :param cn_name: Chinese display name
    :param folder_path: output path
    """
    file_contents = []
    for url in urls:
        content = download_file(url)
        if content is not None:
            file_contents.append(content)
    
    # Merge the contents
    merged_content = merge_file_contents(file_contents)

    # Sort the rules
    sorted_content = sort_rules(merged_content)

    # Create a same-named folder under rules/Clash
    rule_folder_path = os.path.join(folder_path, rule_name)
    os.makedirs(rule_folder_path, exist_ok=True)

    # Write the .list and .yaml files
    write_list_file(rule_name, sorted_content, rule_folder_path)
    write_yaml_file(rule_name, sorted_content, rule_folder_path)    

    # Write the .md file
    write_md_file(urls, rule_name, sorted_content, cn_name, rule_folder_path)

def write_total_md_file(folder_path: str, rule_list_data: List[Dict[str, Union[List[str], str]]], width = 5) -> None:
    """
    Generate a top-level README.md.
    :param folder_path: output path
    :param rule_list_data: rule list data
    :param width: number of table columns
    """
    md_file_path = os.path.join(folder_path, "README.md")

    # Current time
    now_time = get_time()

    total_list_data_number = len(rule_list_data)

    # Build the Markdown content
    md_content = f"""## 前言
本文件由脚本自动生成

## 规则列表
处理的规则总计: {total_list_data_number} 

最后同步时间: {now_time} \n
"""
    rule_names = [f"{item['rule_name']},{item['cn_name']}" for item in rule_list_data]

    rows = []
    for i in range(0, len(rule_names), width):
        row = rule_names[i:i + width]
        rows.append(row)

    # Build the table
    markdown_table = []
    markdown_table.append("| 规则名称 |" + " | ".join(["   "] * (width - 1) ) + " |")  # header row
    markdown_table.append("|" + "----------|" * width)  # separator row

    for row in rows:
        formatted_row = []
        for cell in row:
            # Parse the cell, formatted as "rule_name,cn_name"
            try:
                rule_name, cn_name = cell.split(",", 1)
            except ValueError:
                rule_name = cell.split(",", 1)[0]
                cn_name = ""  # fall back to rule_name below

            # Prefer cn_name when present, otherwise rule_name
            display_name = cn_name if cn_name else rule_name
            # Format the cell as a link
            formatted_cell = f"[{display_name}](https://github.com/Ctory-Nily/rule-script/tree/main/rules/Clash/{rule_name})"
            formatted_row.append(formatted_cell)
        markdown_table.append("| " + " | ".join(formatted_row) + " |")

    md_content += "\n".join(markdown_table)

    try:
        with open(md_file_path, "w", encoding="utf-8") as md_file:
            md_file.write(md_content)
            logging.info(f"Saved .md file: {md_file_path}")
    except IOError as e:
        logging.error(f"Failed to save .md file {md_file_path} - {e}")

if __name__ == "__main__":

    # Root folder for the generated rule files
    folder_path = 'rules/Clash/'
    os.makedirs(folder_path, exist_ok=True)

    # Path of rule_file_list.json
    json_file_path = os.path.join(os.path.dirname(__file__), 'rule_file_list.json')

    # Read rule_file_list.json
    try:
        with open(json_file_path, "r", encoding="utf-8") as json_file:
            rule_list_data = json.load(json_file)
    except FileNotFoundError:
        logging.error(f"File not found: {json_file_path}")
        exit(1)
    except json.JSONDecodeError:
        logging.error(f"Malformed JSON file: {json_file_path}")
        exit(1)

    # Process every entry
    for item in rule_list_data:
        process_file(item["rule_name"], item["rules_urls"], item["cn_name"], folder_path)
    
    # Generate the top-level .md file
    write_total_md_file(folder_path, rule_list_data)
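
The .yaml output from write_yaml_file is Clash's classical-behavior payload format: a comment header, then one indented `- rule` item per line. A condensed sketch of that output shape (the rule name and rules are made up; the real header also includes per-type counts from calculate_rule_number):

```python
content = ["DOMAIN-SUFFIX,example.com", "IP-CIDR,10.0.0.0/8"]

# Header comments, then the payload list, exactly as write_yaml_file joins them
lines = ["# 规则名称: Example", f"# 规则总数量: {len(content)}"]
lines += ["", "payload:"]
lines += [f"  - {rule}" for rule in content]

print("\n".join(lines))
```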

⑧ Push to the repository again; the workflow now shows up under the Actions tab

git add .
git commit -m "Add workflows"
git push -u origin main

⑨ After the workflow's daily commit, sync the repository to your local copy before changing local files

git pull origin main

2.5. Configure a history-clearing Actions workflow (optional)#

Create the .github/workflows folder in your project root (if it does not already exist)

① Create clear-commit.yml (for reference only)

name: Clear Commits

on:
  workflow_dispatch: # allow manual triggering

jobs:
  clear-commits:
    runs-on: ubuntu-latest
    permissions:
      contents: write
      actions: write

    steps:
      - name: Checkout Repository
        uses: actions/checkout@v4
        with:
          ref: main
          fetch-depth: 0

      - name: Configure Git
        run: |
          git config --global user.name "GitHub Action"
          git config --global user.email "action@github.com"
          git config --global advice.detachedHead false

      - name: Rewrite Git History
        run: |
          git checkout --orphan temp-branch
          git add -A
          git commit -m "Initial commit after history clear"
          git branch -D main || true
          git branch -m main
          git push -f origin main

      - name: Cleanup Workflow Runs
        run: |
          # Delete workflow run records via the GitHub API
          echo "Cleaning up workflow runs..."
          RUNS_URL="https://api.github.com/repos/${{ github.repository }}/actions/runs"
          RUNS_RESPONSE=$(curl -s -H "Authorization: Bearer ${{ secrets.PUSH_EVERYDAY }}" "$RUNS_URL")
          RUN_IDS=$(echo "$RUNS_RESPONSE" | jq -r '.workflow_runs[].id')

          for RUN_ID in $RUN_IDS; do
            echo "Deleting run $RUN_ID..."
            curl -s -X DELETE -H "Authorization: Bearer ${{ secrets.PUSH_EVERYDAY }}" "$RUNS_URL/$RUN_ID"
          done
          echo "Workflow runs cleanup completed."

      - name: Post-Cleanup Check
        run: echo "History cleared successfully at $(date)"

② After the history is cleared, all files in the GitHub project sit on a single fresh commit; rebase your local copy onto the new branch:

git pull --rebase

③ If you update the local userDirect.list, userProxy.list, or userReject.list files,
pushing them to GitHub automatically runs the Fetch and Convert Files workflow, which regenerates the annotated .list and .yaml files.
Once the conversion finishes, be sure to run git pull origin main locally to sync the regenerated files; otherwise later .list updates can hit conflicts.

3. Local auto-update script "提交更新.bat"#

The script syncs cleanly whether the local files changed before or after the remote did. Note that the .bat file must be saved in ANSI encoding so cmd handles the non-ASCII text correctly.

@echo off
echo Syncing the GitHub repository
git pull origin main

echo Staging all files
git add .

:: Parse the date and time (%date% is locale-dependent; this assumes a format like 2025/02/20)
for /f "tokens=1-3 delims=/- " %%a in ("%date%") do (
    set year=%%a
    set month=%%b
    set day=%%c
)
for /f "tokens=1-3 delims=:.," %%a in ("%time%") do (
    set hour=%%a
    set minute=%%b
)

:: Strip the leading space from single-digit hours
set hour=%hour: =%

:: Compose the "year month day hour minute" timestamp
set formatted_time=%year%年%month%月%day%日%hour%时%minute%分

:: Ask whether to use a custom commit message
:input_confirm
set /p confirm=Use a custom commit message? (y or n): 
if "%confirm%"=="" (
    echo Input cannot be empty; please try again!
    goto input_confirm
)

:: Labels must sit outside parenthesized blocks, and %commit_msg% would
:: otherwise be expanded before the block runs, so branch with goto instead
if not "%confirm%"=="y" (
    git commit -m "%formatted_time%"
    goto push
)

:input_msg
set /p commit_msg=Enter the commit message: 
if "%commit_msg%"=="" (
    echo Input cannot be empty; please try again!
    goto input_msg
)
git commit -m "%commit_msg%"

:push
echo Pushing to the remote repository
git push -u origin main

echo Done!
pause
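
The %date%/%time% parsing above depends on the Windows locale's date format. If that ever bites, the same default commit message can be built locale-independently in Python (a sketch, not part of the original script):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Build the same "YYYY年MM月DD日HH时MM分" timestamp, independent of the OS locale
now = datetime.now(ZoneInfo("Asia/Shanghai"))
msg = now.strftime("%Y年%m月%d日%H时%M分")
print(msg)
```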

4. Rule-file notes#

① If the rule-providers in your surgio template point at someone else's URLs:

  • 1. Each provider can reference only a single rule link
  • 2. If the upstream repository disappears, the rules are gone
  • 3. Changing a link means re-committing surgio and regenerating the rule files

② If the rule-providers in your surgio template point at your own URLs:

  • 1. Multiple rule links can be merged into one file
  • 2. You manage the repository yourself
  • 3. Updating a link only requires committing to rule-script
  • 4. The rule-providers links in the surgio template stay unchanged
https://fuwari.vercel.app/posts/部署教程/rulescript/自定义分流规则脚本生成并托管/
Author: Ctory-Nily
Published: 2025-02-20
License: CC BY-NC-SA 4.0