This article records common Debian system configuration, Docker installation and related configuration, remote NFS mounting, and other procedural details, for future reference.
Debian versions
- Debian 12
- Debian 11
Switch to the root user
All commands in this article must be run with root privileges.
su - root
Disable the swap partition
View the disk partitions:
fdisk -l
# Device Start End Sectors Size Type
# /dev/sda1 2048 1050623 1048576 512M EFI System
# /dev/sda2 1050624 39942143 38891520 18.5G Linux filesystem
# /dev/sda3 39942144 41940991 1998848 976M Linux swap
If you see bash: fdisk: command not found, switch to the root user and root's home directory first:
su - root
The output shows that the current swap partition is /dev/sda3.
Deactivate the swap partition:
swapoff -v /dev/sda3
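If you are not sure which swap devices are active, /proc/swaps lists them. A small sketch of pulling the device names out of that table (the parse_swaps helper is illustrative, not a standard tool):

```shell
#!/bin/sh
# Illustrative helper: print the device column of /proc/swaps-style input,
# skipping the header line. Pass the table on stdin.
parse_swaps() {
    awk 'NR > 1 { print $1 }'
}

# On a real system you would run: parse_swaps < /proc/swaps
# Demo with sample input in the /proc/swaps format:
printf 'Filename\tType\tSize\tUsed\tPriority\n/dev/sda3\tpartition\t998396\t0\t-2\n' | parse_swaps
```

Each printed device is a candidate argument for swapoff -v.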
Edit /etc/fstab:
vi /etc/fstab
Remove the swap auto-mount entry (it differs from system to system):
# swap was on /dev/sda3 during installation
UUID=723f7399-e196-4981-bc69-1b5604a0451c none swap sw 0 0
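Instead of deleting the line by hand in vi, you can comment it out with sed. A sketch, run here against a scratch copy rather than the real /etc/fstab (the comment_swap helper and the .bak suffix are my choices, not part of the original notes):

```shell
#!/bin/sh
# Illustrative helper: comment out non-comment fstab lines that contain a
# whitespace-delimited "swap" field. Writes a .bak backup of the file first.
comment_swap() {
    sed -i.bak -E 's/^([^#].*[[:space:]]swap[[:space:]].*)$/# \1/' "$1"
}

# Demo on a scratch file instead of the real /etc/fstab:
cat > /tmp/fstab.demo <<'EOF'
UUID=723f7399-e196-4981-bc69-1b5604a0451c none swap sw 0 0
UUID=aaaa-bbbb / ext4 errors=remount-ro 0 1
EOF
comment_swap /tmp/fstab.demo
cat /tmp/fstab.demo   # the swap line is now commented out, the root line untouched
```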
Delete the swap partition:
fdisk /dev/sda
Enter d to delete a partition, then enter the number of the partition to delete (find it beforehand with fdisk -l). fdisk reports that the partition has been deleted; enter w to write the partition table.
Check the partitions again:
fdisk -l
# Device Start End Sectors Size Type
# /dev/sda1 2048 1050623 1048576 512M EFI System
# /dev/sda2 1050624 39942143 38891520 18.5G Linux filesystem
The swap partition is gone.
Expanding the Debian disk (optional)
Since this Debian system is virtualized, enlarge the virtual machine's disk first.
Resize the virtual machine's disk
Shut down the VM
Enlarge the VM's virtual disk
Start the VM
Check the VM's partitions
fdisk -l
Adjust the Debian partitions
Check the current partition layout:
fdisk -l
# Disk /dev/sda: 40 GiB, 42949672960 bytes, 83886080 sectors
# Device Start End Sectors Size Type
# /dev/sda1 2048 1050623 1048576 512M EFI System
# /dev/sda2 1050624 39942143 38891520 18.5G Linux filesystem
Install the growpart resize tool:
apt update
apt install -y cloud-guest-utils
Grow the partition:
growpart /dev/sda 2 # CHANGED: partition=2 start=1050624 old: size=38891520 end=39942144 new: size=82835423 end=83886047
/dev/sda is the target disk and 2 is the number of the partition to grow. Check that the expansion succeeded:
fdisk -l
# Device Start End Sectors Size Type
# /dev/sda1 2048 1050623 1048576 512M EFI System
# /dev/sda2 1050624 83886046 82835423 39.5G Linux filesystem
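The Size column follows directly from the Sectors column, assuming the usual 512-byte logical sectors. A quick sanity check (the sectors_to_gib helper is illustrative):

```shell
#!/bin/sh
# Illustrative helper: convert a 512-byte-sector count to GiB, one decimal.
# LC_ALL=C pins the decimal separator to "." regardless of locale.
sectors_to_gib() {
    LC_ALL=C awk -v s="$1" 'BEGIN { printf "%.1f", s * 512 / 1073741824 }'
}

sectors_to_gib 82835423   # the grown /dev/sda2 → 39.5
```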
Grow the filesystem to fill the partition:
resize2fs /dev/sda2
Remove the resize tool:
apt remove -y cloud-guest-utils
Reboot the system:
systemctl reboot
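Note that resize2fs only handles ext2/ext3/ext4; other filesystems have their own grow commands. A hedged sketch of choosing the right tool from the filesystem type, e.g. as reported by `lsblk -no FSTYPE /dev/sda2` (the grow_cmd helper is mine, not from the original notes; verify against your distribution's tooling):

```shell
#!/bin/sh
# Illustrative helper: name the grow command for a filesystem type.
grow_cmd() {
    case "$1" in
        ext2|ext3|ext4) echo "resize2fs" ;;                    # takes the device
        xfs)            echo "xfs_growfs" ;;                   # takes the mount point
        btrfs)          echo "btrfs filesystem resize max" ;;  # takes the mount point
        *)              echo "unsupported"; return 1 ;;
    esac
}

grow_cmd ext4   # → resize2fs
```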
Adjust the vim configuration
The default vim behavior on Debian 11 is awkward to work with, so adjust the vim configuration.
vi /etc/vim/vimrc.tiny
Change the settings as follows:
set nocompatible
set backspace=2
Save and quit.
Switch the package mirror
cp /etc/apt/sources.list /etc/apt/sources_bak.list
vi /etc/apt/sources.list
Change the contents of /etc/apt/sources.list as follows:
Debian 11:
# Source-package entries are commented out by default to speed up apt update; uncomment them if needed
deb https://mirrors.tuna.tsinghua.edu.cn/debian/ bullseye main contrib non-free
# deb-src https://mirrors.tuna.tsinghua.edu.cn/debian/ bullseye main contrib non-free
deb https://mirrors.tuna.tsinghua.edu.cn/debian/ bullseye-updates main contrib non-free
# deb-src https://mirrors.tuna.tsinghua.edu.cn/debian/ bullseye-updates main contrib non-free
deb https://mirrors.tuna.tsinghua.edu.cn/debian/ bullseye-backports main contrib non-free
# deb-src https://mirrors.tuna.tsinghua.edu.cn/debian/ bullseye-backports main contrib non-free
deb https://mirrors.tuna.tsinghua.edu.cn/debian-security bullseye-security main contrib non-free
# deb-src https://mirrors.tuna.tsinghua.edu.cn/debian-security bullseye-security main contrib non-free
Debian 12:
# Source-package entries are commented out by default to speed up apt update; uncomment them if needed
deb https://mirrors.tuna.tsinghua.edu.cn/debian/ bookworm main contrib non-free non-free-firmware
# deb-src https://mirrors.tuna.tsinghua.edu.cn/debian/ bookworm main contrib non-free non-free-firmware
deb https://mirrors.tuna.tsinghua.edu.cn/debian/ bookworm-updates main contrib non-free non-free-firmware
# deb-src https://mirrors.tuna.tsinghua.edu.cn/debian/ bookworm-updates main contrib non-free non-free-firmware
deb https://mirrors.tuna.tsinghua.edu.cn/debian/ bookworm-backports main contrib non-free non-free-firmware
# deb-src https://mirrors.tuna.tsinghua.edu.cn/debian/ bookworm-backports main contrib non-free non-free-firmware
deb https://mirrors.tuna.tsinghua.edu.cn/debian-security bookworm-security main contrib non-free non-free-firmware
# deb-src https://mirrors.tuna.tsinghua.edu.cn/debian-security bookworm-security main contrib non-free non-free-firmware
# deb https://security.debian.org/debian-security bookworm-security main contrib non-free non-free-firmware
# deb-src https://security.debian.org/debian-security bookworm-security main contrib non-free non-free-firmware
Update packages
apt update # refresh the package index
apt upgrade # upgrade installed packages
apt dist-upgrade # upgrade while handling changed dependencies
apt autoremove && apt clean && apt autoclean # clean up downloaded package archives
Change the time zone
Check the current time zone information:
timedatectl
# Local time: Wed 2023-02-01 15:04:15 HKT
# Universal time: Wed 2023-02-01 07:04:15 UTC
# RTC time: Wed 2023-02-01 07:04:15
# Time zone: Asia/Hong_Kong (HKT, +0800)
# System clock synchronized: yes
# NTP service: active
# RTC in local TZ: no
List the available time zones:
timedatectl list-timezones
Set the time zone:
timedatectl set-timezone Asia/Shanghai
The time zone can also be set with the tzselect wizard.
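A zone can also be previewed for a single command by setting the TZ environment variable, without touching the system default (this assumes the tzdata package is installed):

```shell
#!/bin/sh
# Preview a zone's UTC offset without changing the system time zone.
# Asia/Shanghai has used a fixed +0800 offset (no DST) since 1991.
TZ=Asia/Shanghai date +%z
```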
Adjust the NTPsec configuration
The system uses NTPsec for time synchronization by default.
vi /etc/ntpsec/ntp.conf
The changed part of the configuration file:
# You do need to talk to an NTP server or two (or three).
server ntp.ntsc.ac.cn prefer
server ntp1.aliyun.com
server ntp.tuna.tsinghua.edu.cn
# pool.ntp.org maps to about 1000 low-stratum NTP servers. Your server will
# pick a different set every time it starts up. Please consider joining the
# pool: <http://www.pool.ntp.org/join.html>
# pool 0.debian.pool.ntp.org iburst
# pool 1.debian.pool.ntp.org iburst
# pool 2.debian.pool.ntp.org iburst
# pool 3.debian.pool.ntp.org iburst
Restart the NTPsec service:
systemctl restart ntpsec
Enable IPv4 & IPv6 forwarding
vi /etc/sysctl.conf
Make the following changes:
# Uncomment the next line to enable packet forwarding for IPv4
net.ipv4.ip_forward=1
# Uncomment the next line to enable packet forwarding for IPv6
# Enabling this option disables Stateless Address Autoconfiguration
# based on Router Advertisements for this host
net.ipv6.conf.all.forwarding=1
Restart to apply the changes:
/etc/init.d/procps restart
Set a static IP
List the host's network interfaces:
ip a
# 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
# link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
# inet 127.0.0.1/8 scope host lo
# valid_lft forever preferred_lft forever
# inet6 ::1/128 scope host
# valid_lft forever preferred_lft forever
# 2: ens32: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
# link/ether 00:0c:29:e7:c2:4c brd ff:ff:ff:ff:ff:ff
# altname enp2s0
# inet 10.0.0.2/24 brd 10.0.0.255 scope global ens32
# valid_lft forever preferred_lft forever
# inet6 fe80::20c:29ff:fee7:c24c/64 scope link
# valid_lft forever preferred_lft forever
# 3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
# link/ether 02:42:da:0c:0d:12 brd ff:ff:ff:ff:ff:ff
# inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
# valid_lft forever preferred_lft forever
Edit /etc/network/interfaces:
vi /etc/network/interfaces
Contents before the change:
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
source /etc/network/interfaces.d/*
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
allow-hotplug ens32
iface ens32 inet dhcp
# This is an autoconfigured IPv6 interface
iface ens32 inet6 auto
Contents after the change:
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
source /etc/network/interfaces.d/*
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
auto ens32
iface ens32 inet static
address 10.0.0.2
netmask 255.255.255.0
broadcast 10.0.0.255
gateway 10.0.0.1
# This is an autoconfigured IPv6 interface
iface ens32 inet6 auto
Restart networking:
systemctl restart networking
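The netmask 255.255.255.0 in the config above corresponds to the /24 prefix that ip a prints. A small converter for double-checking this (the mask_to_prefix helper is illustrative):

```shell
#!/bin/sh
# Illustrative helper: convert a dotted-quad netmask to a CIDR prefix
# length by counting the set bits in each octet.
mask_to_prefix() {
    echo "$1" | awk -F. '{
        bits = 0
        for (i = 1; i <= 4; i++) {
            octet = $i
            while (octet > 0) { bits += octet % 2; octet = int(octet / 2) }
        }
        print bits
    }'
}

mask_to_prefix 255.255.255.0   # → 24
```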
Check the network configuration:
ip a
Install Samba
Install SMB:
apt update
apt install -y samba
Back up and edit the SMB configuration file /etc/samba/smb.conf:
cp /etc/samba/smb.conf /etc/samba/smb.conf.bak
vi /etc/samba/smb.conf
The modified configuration:
#
# Sample configuration file for the Samba suite for Debian GNU/Linux.
#
#
# This is the main Samba configuration file. You should read the
# smb.conf(5) manual page in order to understand the options listed
# here. Samba has a huge number of configurable options most of which
# are not shown in this example
#
# Some options that are often worth tuning have been included as
# commented-out examples in this file.
#  - When such options are commented with ";", the proposed setting
#    differs from the default Samba behaviour
#  - When commented with "#", the proposed setting is the default
#    behaviour of Samba but the option is considered important
#    enough to be mentioned here
#
# NOTE: Whenever you modify this file you should run the command
# "testparm" to check that you have not made any basic syntactic
# errors.

#======================= Global Settings =======================

[global]

## Browsing/Identification ###

# Change this to the workgroup/NT-domain name your Samba server will part of
   workgroup = WORKGROUP
   security = user

#### Networking ####

# The specific set of interfaces / networks to bind to
# This can be either the interface name or an IP address/netmask;
# interface names are normally preferred
;   interfaces = 127.0.0.0/8 ens33

# Only bind to the named interfaces and/or networks; you must use the
# 'interfaces' option above to use this.
# It is recommended that you enable this feature if your Samba machine is
# not protected by a firewall or is a firewall itself. However, this
# option cannot handle dynamic or non-broadcast interfaces correctly.
;   bind interfaces only = yes

#### Debugging/Accounting ####

# This tells Samba to use a separate log file for each machine
# that connects
   log file = /var/log/samba/log.%m

# Cap the size of the individual log files (in KiB).
   max log size = 1000

# We want Samba to only log to /var/log/samba/log.{smbd,nmbd}.
# Append syslog@1 if you want important messages to be sent to syslog too.
   logging = file

# Do something sensible when Samba crashes: mail the admin a backtrace
   panic action = /usr/share/samba/panic-action %d

####### Authentication #######

# Server role. Defines in which mode Samba will operate. Possible
# values are "standalone server", "member server", "classic primary
# domain controller", "classic backup domain controller", "active
# directory domain controller".
#
# Most people will want "standalone server" or "member server".
# Running as "active directory domain controller" will require first
# running "samba-tool domain provision" to wipe databases and create a
# new domain.
   server role = standalone server

   obey pam restrictions = yes

# This boolean parameter controls whether Samba attempts to sync the Unix
# password with the SMB password when the encrypted SMB password in the
# passdb is changed.
   unix password sync = yes

# For Unix password sync to work on a Debian GNU/Linux system, the following
# parameters must be set (thanks to Ian Kahan <<[email protected]> for
# sending the correct chat script for the passwd program in Debian Sarge).
   passwd program = /usr/bin/passwd %u
   passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .

# This boolean controls whether PAM will be used for password changes
# when requested by an SMB client instead of the program listed in
# 'passwd program'. The default is 'no'.
   pam password change = yes

# This option controls how unsuccessful authentication attempts are mapped
# to anonymous connections
   map to guest = bad user

# netbios name
   netbios name = Debian

# macos adapt
# vfs objects = catia fruit streams_xattr
# fruit:aapl = yes
# fruit:metadata = stream
# fruit:model = MacPro
# fruit:time machine = yes
# fruit:posix_rename = yes
# fruit:veto_appledouble = no
# fruit:wipe_intentionally_left_blank_rfork = yes
# fruit:delete_empty_adfiles = yes
# ea support = yes

########## Domains ###########

#
# The following settings only takes effect if 'server role = classic
# primary domain controller', 'server role = classic backup domain controller'
# or 'domain logons' is set
#

# It specifies the location of the user's
# profile directory from the client point of view) The following
# required a [profiles] share to be setup on the samba server (see
# below)
;   logon path = \\%N\profiles\%U
# Another common choice is storing the profile in the user's home directory
# (this is Samba's default)
#   logon path = \\%N\%U\profile

# The following setting only takes effect if 'domain logons' is set
# It specifies the location of a user's home directory (from the client
# point of view)
;   logon drive = H:
#   logon home = \\%N\%U

# The following setting only takes effect if 'domain logons' is set
# It specifies the script to run during logon. The script must be stored
# in the [netlogon] share
# NOTE: Must be store in 'DOS' file format convention
;   logon script = logon.cmd

# This allows Unix users to be created on the domain controller via the SAMR
# RPC pipe. The example command creates a user account with a disabled Unix
# password; please adapt to your needs
; add user script = /usr/sbin/adduser --quiet --disabled-password --gecos "" %u

# This allows machine accounts to be created on the domain controller via the
# SAMR RPC pipe.
# The following assumes a "machines" group exists on the system
; add machine script = /usr/sbin/useradd -g machines -c "%u machine account" -d /var/lib/samba -s /bin/false %u

# This allows Unix groups to be created on the domain controller via the SAMR
# RPC pipe.
; add group script = /usr/sbin/addgroup --force-badname %g

############ Misc ############

# Using the following line enables you to customise your configuration
# on a per machine basis. The %m gets replaced with the netbios name
# of the machine that is connecting
;   include = /home/samba/etc/smb.conf.%m

# Some defaults for winbind (make sure you're not using the ranges
# for something else.)
;   idmap config * : backend = tdb
;   idmap config * : range = 3000-7999
;   idmap config YOURDOMAINHERE : backend = tdb
;   idmap config YOURDOMAINHERE : range = 100000-999999
;   template shell = /bin/bash

# Setup usershare options to enable non-root users to share folders
# with the net usershare command.

# Maximum number of usershare. 0 means that usershare is disabled.
#   usershare max shares = 100

# Allow users who've been granted usershare privileges to create
# public shares, not just authenticated ones
   usershare allow guests = no

#======================= Share Definitions =======================

# [homes]
#    comment = Home Directories
#    browseable = no

# By default, the home directories are exported read-only. Change the
# next parameter to 'no' if you want to be able to write to them.
#    read only = yes

# File creation mask is set to 0700 for security reasons. If you want to
# create files with group=rw permissions, set next parameter to 0775.
#    create mask = 0700

# Directory creation mask is set to 0700 for security reasons. If you want to
# create dirs. with group=rw permissions, set next parameter to 0775.
#    directory mask = 0700

# By default, \\server\username shares can be connected to by anyone
# with access to the samba server.
# The following parameter makes sure that only "username" can connect
# to \\server\username
# This might need tweaking when using external authentication schemes
#    valid users = %S

# Un-comment the following and create the netlogon directory for Domain Logons
# (you need to configure Samba to act as a domain controller too.)
# ;[netlogon]
# ;   comment = Network Logon Service
# ;   path = /home/samba/netlogon
# ;   guest ok = yes
# ;   read only = yes

# Un-comment the following and create the profiles directory to store
# users profiles (see the "logon path" option above)
# (you need to configure Samba to act as a domain controller too.)
# The path below should be writable by all users so that their
# profile directory may be created the first time they log on
# ;[profiles]
# ;   comment = Users profiles
# ;   path = /home/samba/profiles
# ;   guest ok = no
# ;   browseable = no
# ;   create mask = 0600
# ;   directory mask = 0700

# [printers]
#    comment = All Printers
#    browseable = no
#    path = /var/spool/samba
#    printable = yes
#    guest ok = no
#    read only = yes
#    create mask = 0700

# Windows clients look for this share name as a source of downloadable
# printer drivers
# [print$]
#    comment = Printer Drivers
#    path = /var/lib/samba/printers
#    browseable = yes
#    read only = yes
#    guest ok = no

# Uncomment to allow remote administration of Windows print drivers.
# You may need to replace 'lpadmin' with the name of the group your
# admin users are members of.
# Please note that you also need to set appropriate Unix permissions
# to the drivers directory for these users to have write rights in it
# ;   write list = root, @lpadmin

[volume1]
   comment = samba share folder
   path = /volume1
   available = yes
   read only = no
   browseable = yes
   guest ok = no
   admin users = messica
   valid users = messica
   create mask = 0777
   directory mask = 0777
Start/restart the SMB service:
systemctl start smbd
systemctl restart smbd
Start the service at boot:
systemctl enable smbd
Add an SMB user:
smbpasswd -a <UserName>
Enter the new user's password and confirm it; the user is then added. <UserName> must be an existing system user.
Enable LAN discovery
Install avahi:
apt install avahi-daemon
Adjust the name shown in the network neighborhood as needed.
Start at boot:
systemctl enable avahi-daemon
Install Docker
Install Docker Engine on Debian
Uninstall old versions
apt remove docker docker-engine docker.io containerd runc
Install using the repository
Set up the repository
Update the apt package index and install packages to allow apt to use a repository over HTTPS:
apt update
apt install ca-certificates curl gnupg
Add Docker’s official GPG key:
install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
chmod a+r /etc/apt/keyrings/docker.gpg
Use the following command to set up the repository:
echo "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian "$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
Install Docker Engine
Update the apt package index:
apt update
Install Docker Engine, containerd, and the buildx/compose plugins:
apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
Verify that the Docker Engine installation is successful by running the hello-world image:
docker run hello-world
Enable Docker to start on boot:
systemctl enable docker
Use an Alibaba Cloud registry mirror for Docker
With a registry mirror configured, a pulled latest image may not actually be the newest, so configuring this is not recommended.
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://xxxxxx.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload
systemctl restart docker
Configure the default Docker logging driver
You can tell the Docker daemon to use a specific logging driver by creating or editing /etc/docker/daemon.json:
{
"log-driver": "json-file"
}
Use the log-opts key to configure the driver's options:
{
"log-driver": "json-file",
"log-opts": {
"max-size": "100m",
"max-file": "3"
}
}
max-size caps each log file at 100 MB, and max-file allows at most 3 log files.
Restart the Docker daemon for the configuration to take effect:
systemctl daemon-reload
systemctl restart docker
The configuration applies only to newly created containers; existing containers do not pick up the new log settings automatically.
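If both the registry mirror and the log options are wanted, they belong in the same /etc/docker/daemon.json, which must remain a single JSON object (the mirror URL below is the same placeholder as above, not a real endpoint):

```json
{
  "registry-mirrors": ["https://xxxxxx.mirror.aliyuncs.com"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "3"
  }
}
```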
Create a Docker macvlan network
Enable promiscuous mode on the NIC
From the command line (reverts after a reboot):
ip link set ens32 promisc on
# or
ifconfig ens32 promisc
ens32 is the host's NIC.
Enable promiscuous mode via the config file (persists across reboots)
Edit /etc/network/interfaces:
vi /etc/network/interfaces
Add the following line:
up ip link set ens32 promisc on
The file after the change:
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto ens32
iface ens32 inet static
address 10.0.0.2
netmask 255.255.255.0
broadcast 10.0.0.255
gateway 10.0.0.1
up ip link set ens32 promisc on

# This is an autoconfigured IPv6 interface
iface ens32 inet6 auto
Enable promiscuous mode on the hypervisor's virtual NIC
If the host itself is a virtual machine, promiscuous mode must also be enabled on the virtual switch; otherwise containers on the macvlan are unreachable.
Taking an ESXi VM as an example:
Enable promiscuous mode on the virtual switch
Enable promiscuous mode on the virtual NIC
Create the docker macvlan network:
docker network create -d macvlan --subnet=10.0.0.0/24 --gateway=10.0.0.1 --subnet=fc00:1::/64 --gateway=fc00:1::1 --ipv6 -o parent=ens32 macvlan
Deploy the Portainer container
Portainer provides a convenient way to manage Docker containers. Here it is deployed with the host network mode.
mkdir -p /volume1/docker/portainer/data
docker pull portainer/portainer-ce:latest
docker run -d --net=host --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v /volume1/docker/portainer/data:/data portainer/portainer-ce:latest
Deploy the Mosdns container
Mosdns is used to split DNS traffic.
Mosdns configuration files
Mosdns V5 configuration
Directory layout
.
├── bin
│   └── updator.sh
├── config.yaml
├── custom_rules
│   ├── block.txt
│   ├── ddns.txt
│   ├── direct.txt
│   ├── host.txt
│   ├── proxy.txt
│   └── redirect.txt
└── online_rules
    ├── apple_cn.txt
    ├── direct_list.txt
    ├── gfw.txt
    ├── google_cn.txt
    ├── greatfire.txt
    └── reject_list.txt
The mosdns configuration file config.yaml:

log:
  level: info # Log level. One of "debug" "info" "warn" "error". Default "info".
  file: '' # Write logs to a file. Defaults to stderr.
  production: true # Pure JSON output.

# API settings
api:
  http: ':8080' # Serve the API on this address.

# []string, load plugin settings from other config files.
# Included plugins are initialized before the plugins in this file.
include: []

plugins:
  - tag: ddns_domain
    type: domain_set
    args:
      # exps: # Domain expressions.
      #   - "qq.com"
      files:
        - ./custom_rules/ddns.txt

  - tag: direct_domain
    type: domain_set
    args:
      # exps: # Domain expressions.
      #   - "qq.com"
      files:
        - ./custom_rules/direct.txt

  - tag: proxy_domain
    type: domain_set
    args:
      files:
        - ./custom_rules/proxy.txt

  - tag: block_domain
    type: domain_set
    args:
      files:
        - ./custom_rules/block.txt

  - tag: host_domain
    type: hosts
    args:
      # entries: # []string
      #   - "google.com 108.177.122.113" # record
      files:
        - ./custom_rules/host.txt

  - tag: redirect_domain
    type: redirect
    args:
      # rules:
      #   # format: [domain match rule] [redirect-to domain]
      #   - a.com b.com
      files:
        - ./custom_rules/redirect.txt

  - tag: cn_domain
    type: domain_set
    args:
      files:
        - ./online_rules/direct_list.txt
        - ./online_rules/apple_cn.txt
        - ./online_rules/google_cn.txt

  - tag: not_cn_domain
    type: domain_set
    args:
      files:
        - ./online_rules/proxy_list.txt
        - ./online_rules/gfw.txt
        - ./online_rules/greatfire.txt

  - tag: ads_domain
    type: domain_set
    args:
      files:
        - ./online_rules/reject_list.txt

  - tag: forward_local
    type: forward
    args:
      concurrent: 2
      upstreams:
        - tag: ali
          # addr: https://dns.alidns.com/dns-query
          addr: https://223.5.5.5/dns-query
          # dial_addr: 223.5.5.5
        - tag: DNSPod
          # addr: https://doh.pub/dns-query
          addr: https://120.53.53.53/dns-query
          # dial_addr: 120.53.53.53

  - tag: forward_remote
    type: forward
    args:
      concurrent: 2
      upstreams:
        - tag: google
          # addr: https://dns.google/dns-query
          addr: https://8.8.8.8/dns-query
          # dial_addr: 8.8.8.8
        - tag: cloudflare
          # addr: https://cloudflare-dns.com/dns-query
          addr: https://1.1.1.1/dns-query
          # dial_addr: 1.1.1.1

  - tag: fallback_query
    type: fallback
    args:
      primary: forward_local # tag of the primary executable plugin
      secondary: forward_remote # tag of the secondary executable plugin
      threshold: 500 # no-response fallback threshold, in milliseconds. Default 500.
      always_standby: true # keep the secondary plugin always on standby

  - tag: mem_cache
    type: cache
    args:
      size: 4096 # Built-in memory cache size, in entries. Default 1024. Each cache plugin has its own cache.
      # lazy cache settings. lazy_cache_ttl > 0 enables lazy cache.
      # All answers stay in the cache for lazy_cache_ttl seconds, while their own TTL remains in effect.
      # On a hit against an expired answer, the cache immediately returns it with a TTL of 5
      # and refreshes the data in the background.
      # Compared with force-inflating answer TTLs, lazy cache improves the hit rate
      # while keeping the data reasonably fresh.
      lazy_cache_ttl: 0 # Lazy cache lifetime in seconds. Default 0 (lazy cache disabled).
      # Suggested: 86400 (1 day) to 259200 (3 days)

  - tag: main_sequence
    type: sequence
    args:
      - matches:
          - qtype 12 # PTR
        exec: reject 0 # DNS rcode
      # - exec: prefer_ipv4
      - matches:
          - '!qname $ddns_domain'
        exec: $mem_cache
      - matches:
          - has_resp
        exec: accept
      - exec: $redirect_domain
      - matches:
          - has_resp
        exec: accept
      - exec: $host_domain
      - matches:
          - has_resp
        exec: accept
      - matches:
          - qname $direct_domain
        exec: $forward_local
      - matches:
          - has_resp
        exec: accept
      - matches:
          - qname $proxy_domain
        exec: $forward_remote
      - matches:
          - has_resp
        exec: accept
      - matches:
          - qname $block_domain $ads_domain
        exec: reject 3 # Block by returning NXDOMAIN (3) directly.
      - matches:
          - qname $cn_domain
        exec: $forward_local
      - matches:
          - has_resp
        exec: accept
      - matches:
          - qname $not_cn_domain
        exec: $forward_remote
      - matches:
          - has_resp
        exec: accept
      # Remaining unmatched domains: forward to the appropriate DNS upstream
      # according to the location of the returned IP.
      - exec: $fallback_query
      - matches:
          - '!has_resp'
        exec: drop_resp

  # When a sequence ends (reject, accept, or running off the end) it returns to its
  # caller. In this config the callers are the servers below, which then return the
  # answer to the client.

  # Start udp and tcp servers.
  - tag: udp_server
    type: udp_server
    args:
      entry: main_sequence # received queries run the logic above
      listen: ':53'

  - tag: tcp_server
    type: tcp_server
    args:
      entry: main_sequence
      listen: ':53'
Update script
#!/bin/bash

mosdnsDir="/volume1/docker/mosdns"

# create tmp directory
mkdir -p ${mosdnsDir}/tmp

echo "########################### start download files ###########################"

# download related files to tmp directory
# note: the release assets use hyphens (e.g. direct-list.txt); the local
# copies are saved with underscores (e.g. direct_list.txt)
fileList="direct-list proxy-list reject-list apple-cn google-cn gfw greatfire"
for filename in $fileList; do
    newFilename=${filename//-/_}
    echo -e "\n downloading https://raw.githubusercontent.com/Loyalsoldier/v2ray-rules-dat/release/${filename}.txt ... \n"
    curl -C - --retry 10 https://raw.githubusercontent.com/Loyalsoldier/v2ray-rules-dat/release/${filename}.txt >> ${mosdnsDir}/tmp/${newFilename}.txt
done

echo "########################### all files download successfully! ###########################"

# move all tmp directory files to rules directory
mv -f ${mosdnsDir}/tmp/* ${mosdnsDir}/online_rules

# force delete tmp directory
rm -rf ${mosdnsDir}/tmp

# restart mosdns
docker restart mosdns
echo "mosdns restarted"
Mosdns V4 configuration
Directory layout
.
├── bin
│   └── updator.sh
├── config.yaml
├── geoip.dat
├── geosite.dat
└── rules
    ├── block.txt
    ├── direct.txt
    ├── host.txt
    ├── proxy.txt
    └── redirect.txt
The mosdns configuration file:

log:
  level: info
  file: ''

# Load include, data source, plugin, and server settings from other config files.
# Included settings are initialized before the settings in this file.
include: []

# Data source settings
data_providers:
  - tag: geoIP
    file: './geoip.dat'
    auto_reload: true
  - tag: geoSite
    file: './geosite.dat'
    auto_reload: true
  - tag: proxy
    file: './rules/proxy.txt'
    auto_reload: true
  - tag: direct
    file: './rules/direct.txt'
    auto_reload: true
  - tag: host
    file: './rules/host.txt'
    auto_reload: true
  - tag: block
    file: './rules/block.txt'
    auto_reload: true
  - tag: redirect
    file: './rules/redirect.txt'
    auto_reload: true

plugins:
  ################ Matcher plugins #################
  - tag: queryIsDirectDomain
    type: query_matcher
    args:
      domain:
        - 'provider:direct'
  - tag: queryIsProxyDomain
    type: query_matcher
    args:
      domain:
        - 'provider:proxy'
  - tag: queryIsBlockDomain
    type: query_matcher
    args:
      domain:
        - 'provider:block'

  # match hosts
  - tag: queryIsHostDomain
    type: hosts
    args:
      hosts:
        - 'provider:host'

  # match redirect domains
  - tag: queryIsRedirectDomain
    type: redirect
    args:
      rule:
        - 'provider:redirect'

  # matcher for local (CN) domains
  - tag: queryIsAppleCNDomain
    type: query_matcher
    args:
      domain:
        - 'provider:geoSite:apple-cn'

  # matcher for local (CN) domains
  - tag: queryIsCNDomain
    type: query_matcher
    args:
      domain:
        - 'provider:geoSite:cn'

  # matcher for non-CN domains
  - tag: queryIsNotCNDomain
    type: query_matcher
    args:
      domain:
        - 'provider:geoSite:geolocation-!cn'

  # matcher for ad domains
  - tag: queryIsADDomain
    type: query_matcher
    args:
      domain:
        - 'provider:geoSite:category-ads-all'

  # matcher for CN IPs in responses
  - tag: responseHasCNIP
    type: response_matcher
    args:
      ip:
        - 'provider:geoIP:cn'

  ################ Main flow #################
  - tag: ali
    type: forward
    args:
      upstream:
        - addr: https://dns.alidns.com/dns-query
          ip_addr: # When the server address is a domain, specify its IPs manually (multiple allowed). If set, no bootstrap server is used.
            - 223.5.5.5
            - 223.6.6.6
            # - 2400:3200::1
            # - 2400:3200:baba::1
          trusted: true # whether this is a trusted server
        - addr: tls://dns.alidns.com # server address
          ip_addr: # manually specified server IPs (multiple allowed)
            - 223.5.5.5
            - 223.6.6.6
            # - 2400:3200::1
            # - 2400:3200:baba::1
          trusted: true # whether this is a trusted server

  - tag: DNSPod
    type: forward
    args:
      upstream:
        - addr: https://doh.pub/dns-query
          ip_addr: # When the server address is a domain, specify its IPs manually (multiple allowed). If set, no bootstrap server is used.
            - 1.12.12.12
            - 120.53.53.53
          trusted: true # whether this is a trusted server
        - addr: tls://dot.pub # server address
          ip_addr: # manually specified server IPs (multiple allowed)
            - 1.12.12.12
            - 120.53.53.53
          trusted: true # whether this is a trusted server

  - tag: google
    type: forward
    args:
      upstream:
        - addr: https://dns.google/dns-query # required; server address
          ip_addr: # When the server address is a domain, specify its IPs manually (multiple allowed). If set, no bootstrap server is used.
            - 8.8.4.4
            - 8.8.8.8
            # - 2001:4860:4860::8844
            # - 2001:4860:4860::8888
          trusted: true # whether this is a trusted server; ported from mosdns, see the `fast_forward` plugin docs

  - tag: cloudflare
    type: forward
    args:
      upstream:
        - addr: https://cloudflare-dns.com/dns-query
          ip_addr:
            - 1.0.0.1
            - 1.1.1.1
            # - 2606:4700:4700::1001
            # - 2606:4700:4700::1111
          trusted: true

  - tag: googleDot
    type: fast_forward
    args:
      upstream:
        - addr: tls://dns.google
          dial_addr: 8.8.4.4
          trusted: true
          enable_pipeline: true
        - addr: tls://dns.google
          dial_addr: 8.8.8.8
          trusted: true
          enable_pipeline: true
        # - addr: tls://dns.google
        #   dial_addr: 2001:4860:4860::8844
        #   trusted: true
        #   enable_pipeline: true
        # - addr: tls://dns.google
        #   dial_addr: 2001:4860:4860::8888
        #   trusted: true
        #   enable_pipeline: true

  - tag: cloudflareDot
    type: fast_forward
    args:
      upstream:
        - addr: tls://1dot1dot1dot1.cloudflare-dns.com
          dial_addr: 1.0.0.1
          trusted: true
          enable_pipeline: true
        - addr: tls://1dot1dot1dot1.cloudflare-dns.com
          dial_addr: 1.1.1.1
          trusted: true
          enable_pipeline: true
        # - addr: tls://1dot1dot1dot1.cloudflare-dns.com
        #   dial_addr: 2606:4700:4700::1001
        #   trusted: true
        #   enable_pipeline: true
        # - addr: tls://1dot1dot1dot1.cloudflare-dns.com
        #   dial_addr: 2606:4700:4700::1111
        #   trusted: true
        #   enable_pipeline: true

  - tag: googleH3
    type: fast_forward
    args:
      # Upstream servers. At least one is required; multiple servers are queried concurrently.
      upstream:
        - addr: https://dns.google/dns-query
          dial_addr: 8.8.4.4
          trusted: true
          enable_http3: true
        - addr: https://dns.google/dns-query
          dial_addr: 8.8.8.8
          trusted: true
          enable_http3: true
        # - addr: https://dns.google/dns-query
        #   dial_addr: 2001:4860:4860::8844
        #   trusted: true
        #   enable_http3: true
        # - addr: https://dns.google/dns-query
        #   dial_addr: 2001:4860:4860::8888
        #   trusted: true
        #   enable_http3: true

  - tag: cloudflareH3
    type: fast_forward
    args:
      upstream:
        - addr: https://cloudflare-dns.com/dns-query
          dial_addr: 1.0.0.1
          trusted: true
          enable_http3: true
        - addr: https://cloudflare-dns.com/dns-query
          dial_addr: 1.1.1.1
          trusted: true
          enable_http3: true
        # - addr: https://cloudflare-dns.com/dns-query
        #   dial_addr: 2606:4700:4700::1001
        #   trusted: true
        #   enable_http3: true
        # - addr: https://cloudflare-dns.com/dns-query
        #   dial_addr: 2606:4700:4700::1111
        #   trusted: true
        #   enable_http3: true

  # forward queries to the local (CN) servers
  - tag: forwardLocal
    type: sequence
    args:
      exec:
        - parallel:
            - - ali
            - - DNSPod

  # forward queries to the remote servers
  - tag: forwardRemote
    type: sequence
    args:
      exec:
        - parallel: # in parallel
            - - google
            - - cloudflare
            - - googleDot
            - - cloudflareDot
            - - googleH3
            - - cloudflareH3

  # the plugin containing the traffic-splitting logic
  - tag: mainSequence
    type: sequence
    args:
      exec:
        - queryIsRedirectDomain
        - queryIsHostDomain
        # - if: 'queryIsBlockDomain || queryIsADDomain' # known ad domains
        - if: queryIsBlockDomain # known ad domains
          exec:
            - _new_nxdomain_response # generate an NXDOMAIN answer
            - _return # return immediately (end this sequence)
        - if: queryIsDirectDomain || queryIsAppleCNDomain || queryIsCNDomain
          exec:
            - _pad_query # pad to a fixed length
            - _prefer_ipv4 # prefer IPv4
            - forwardLocal
            - _return
        - if: queryIsProxyDomain || queryIsNotCNDomain
          exec:
            - _pad_query # pad to a fixed length
            - _prefer_ipv4 # prefer IPv4
            - forwardRemote
            - _return
        # Remaining unknown domains are split by IP. See the `fallback` workflow for details.
        - primary:
            - _pad_query
            - _prefer_ipv4
            - forwardRemote
          secondary:
            - _pad_query
            - _prefer_ipv4
            - forwardLocal
            - if: '(!responseHasCNIP) && [_response_valid_answer]'
              exec:
                - _drop_response
          stat_length: 10 # regular fallback statistics window. Default 0 (regular fallback disabled). Optional.
          threshold: 5 # regular fallback trigger threshold; must not exceed stat_length. Default 0. Optional.
          fast_fallback: 200 # fast fallback trigger threshold in milliseconds. Default 0 (fast fallback disabled). Optional.
          always_standby: true # keep the fast-fallback secondary always on standby. Optional.

  # modify answer TTLs
  - tag: modifyTTL
    type: ttl
    args:
      minimal_ttl: 300
      maximum_ttl: 3600

  # reverse lookup (IP to domain)
  - tag: PTR
    type: reverse_lookup
    args:
      size: 65535 # built-in cache size
      ttl: 1800 # cache time in seconds. Default 1800 (30 min). Answer record TTLs are also capped at this value.
      handle_ptr: true # Whether to answer PTR queries directly. If the PTR's IP hits the cache, the generated answer triggers an automatic _return, skipping later plugins; on a miss the query continues down the chain.

  # cache
  - tag: memCache
    type: cache
    args:
      size: 4096 # number of entries
      lazy_cache_ttl: 86400 # lazy cache lifetime: one day

  - tag: entry
    type: sequence
    args:
      exec:
        - _misc_optm
        - PTR # reverse lookup (IP to domain)
        # - _query_summary
        - memCache # enable caching
        - mainSequence # run the main sequence
        - modifyTTL # adjust TTLs

# Server settings
servers:
  - exec: entry # tag of the plugin this server runs
    timeout: 5 # query processing timeout in seconds. Default 5.
    listeners: # listen settings
      - protocol: udp
        addr: ':53'
      - protocol: tcp
        addr: ':53'

# API settings
api:
  http: ':8080'
Rule file update script
#!/bin/bash

mosdnsDir="/volume1/docker/mosdns"

# create tmp directory
mkdir -p ${mosdnsDir}/tmp

echo "########################### start download files ###########################"

# download related files to tmp directory
fileList="geoip geosite"
for filename in $fileList; do
    newFilename=${filename//-/_}
    echo -e "\n downloading https://raw.githubusercontent.com/Loyalsoldier/v2ray-rules-dat/release/${newFilename}.dat ... \n"
    curl -C - --retry 10 https://raw.githubusercontent.com/Loyalsoldier/v2ray-rules-dat/release/${newFilename}.dat >> ${mosdnsDir}/tmp/${newFilename}.dat
done

echo "########################### all files download successfully! ###########################"

# move all tmp directory files into place
mv -f ${mosdnsDir}/tmp/* ${mosdnsDir}/

# force delete tmp directory
rm -rf ${mosdnsDir}/tmp

# restart mosdns
docker restart mosdns
echo "mosdns restarted"
创建 mosdns 容器
docker run -d --net=macvlan --ip=10.0.0.7 --ip6=fc00:1::7 --mac-address=02:42:0a:00:00:07 --name=mosdns --restart=always -v /volume1/docker/mosdns:/etc/mosdns irinesistiana/mosdns:latest
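命令中的 `--mac-address=02:42:0a:00:00:07` 并非随意取值:`02:42` 是 Docker 常用的本地管理 MAC 前缀,后四段 `0a:00:00:07` 正是 IP `10.0.0.7` 的十六进制表示,这样 macvlan 网络中 MAC 与 IP 一一对应。下面用 shell 演示这一换算(假设性示例,仅说明对应关系):

```shell
#!/bin/bash
# 把点分十进制 IP 换算成带 02:42 前缀的 MAC 地址
ip="10.0.0.7"
printf '02:42'
printf ':%02x' $(echo "${ip}" | tr '.' ' ')   # 每个十进制段转为两位十六进制
echo   # 输出: 02:42:0a:00:00:07
```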
Crontab 定时任务配置
开启 crontab 日志

Debian 11

默认系统未开启 crontab 日志,需要手动开启。

vi /etc/rsyslog.conf
去掉如下内容的注释:

#cron.*                         /var/log/cron.log
重启日志服务
/etc/init.d/rsyslog restart # or systemctl restart rsyslog
Debian 12

Debian 12 默认使用 Systemd-Journald 进行日志记录,不推荐再使用 Rsyslog,以防止日志消息在磁盘上写入两次。可以通过 journalctl 命令行工具访问日志,并通过 /etc/systemd/journald.conf 配置 systemd-journald。
创建定时任务
修改 crontab 配置文件

vi /etc/crontab
在配置文件末尾新增内容如下:
30 1 * * * root /volume1/docker/mosdns/bin/updator.sh
每天 1:30 以 root 身份执行更新脚本
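/etc/crontab 的每一行依次为五个时间字段(分、时、日、月、周)、执行用户和命令。下面的注释示意各字段含义(其中第二条任务为假设性示例,仅作演示):

```shell
# 分(0-59) 时(0-23) 日(1-31) 月(1-12) 周(0-7,0 和 7 均为周日) 用户 命令
30 1 * * *    root    /volume1/docker/mosdns/bin/updator.sh   # 每天 01:30 执行
# 0 4 * * 0   root    /path/to/weekly.sh                      # 假设性示例:每周日 04:00 执行
```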
重新加载 crontab 配置文件

/etc/init.d/cron restart # or systemctl restart cron
查看日志信息
tail -f /var/log/cron.log   # Debian 11
journalctl | grep cron      # Debian 12
部署 clash 容器
创建 clash 配置文件

目录结构
.
├── Country.mmdb
├── config.yaml
├── custom_rules
│   ├── direct.yaml
│   ├── proxy.yaml
│   └── reject.yaml
├── dashboard
│   ├── CNAME
│   ├── assets
│   │   ├── index-5e90ca00.js
│   │   ├── index-6d88662b.css
│   │   ├── logo-b453e72f.png
│   │   └── vendor-827b5617.js
│   ├── index.html
│   ├── manifest.webmanifest
│   ├── sw.js
│   └── workbox-e0782b83.js
├── online_rules
│   ├── apple.yaml
│   ├── applications.yaml
│   ├── cncidr.yaml
│   ├── direct.yaml
│   ├── gfw.yaml
│   ├── google.yaml
│   ├── greatfire.yaml
│   ├── icloud.yaml
│   ├── lancidr.yaml
│   ├── private.yaml
│   ├── proxy.yaml
│   ├── reject.yaml
│   ├── telegramcidr.yaml
│   └── tld-not-cn.yaml
└── proxies
说明
config.yaml 和 dashboard 是必须的。dashboard 内容可以从这里下载。
需要将订阅地址添加到 ./custom_rules/direct.yaml 文件:修改文件内的 subscribe.xxxxxx.com 为订阅地址即可。
- 这里我们将所有的 DNS 请求交由 Mosdns 处理。
宿主机创建 tun 设备

ip tuntap add dev tun mode tun
删除 tun 设备

ip tuntap del dev tun mode tun
创建 clash 容器
docker run -d --net=macvlan --ip=10.0.0.8 --ip6=fc00:1::8 --mac-address=02:42:0a:00:00:08 --name=clash-premium --restart=always --hostname=clash --env TZ=Asia/Shanghai --cap-add=NET_ADMIN --device=/dev/net/tun:/dev/net/tun -v /volume1/docker/clash:/root/.config/clash dreamacro/clash-premium:latest
将需要的流量分流到 clash
由于依赖个人的网络环境,分流方式省略。
部署个人导航站点
站点静态资源获取及定制化
下载预编译的静态资源
可以从 picopock/navigation_website github release 页面下载构建好的产物,然后搜索替换相应的内容即可。
从源码构建
拉取 picopock/navigation_website 项目代码
拉取项目代码
git clone https://github.com/picopock/navigation_website.git
安装依赖
yarn
启动开发服务
yarn dev --host

# 启动服务并且打开一个浏览器窗口
yarn dev -- --open
自定义站点内容
修改站点信息
站点相关信息保存在 src/lib/constants/siteInfo.ts 文件中。

| 字段 | 说明 |
| --- | --- |
| siteName | 站点名称 |
| siteCopyright | 站点版权信息 |
| siteICPFiling | 工信部备案信息 |
| siteICPFilingURL | 工信部备案信息查询链接 |
| sitePoliceFiling | 公安备案信息 |
| sitePoliceFilingURL | 公安备案信息查询链接 |

修改导航条目信息
导航信息保存在 src/lib/constants/nav.ts 文件中。

站点列表定义

定义多个站点,用于多个站点间切换。

| 站点字段 | 类型 | 说明 |
| --- | --- | --- |
| name | string | 站点名称。如 北京、上海。 |
| value | string | 站点索引。后续用来定义站点地址。 |

默认站点索引

defaultSiteIndex 定义的站点列表中默认展示的站点的索引。索引从 0 开始,默认值:0。
导航列表定义

| 导航项字段 | 类型 | 说明 |
| --- | --- | --- |
| name | string | 导航项的名称 |
| link | object | 导航项不同站点的链接。 |
| source | string | 导航项 logo。支持 http(s)、image、svg 格式 |

说明:如果 source 字段不是 http(s) 格式,比如 jellyfin.svg,需要将 jellyfin.svg 添加到 static/navIcons/ 目录下。

说明:如果 link 字段下未定义当前选中站点的链接,则不会展示该导航项。

举个🌰
export const siteList: ISite[] = [
  { name: '上海', value: 'ShangHai' },
  { name: '北京', value: 'BeiJing' },
  { name: '广州', value: 'GuangZhou' },
  { name: '深圳', value: 'ShenZhen' },
];

export const navList: INavItem[] = [
  {
    name: 'RouterOS',
    link: { ShangHai: 'http://10.0.0.1', BeiJing: 'http://10.1.0.1' },
    source: 'routerOS.png'
  },
  {
    name: 'OpenWRT',
    link: { ShangHai: 'http://10.0.0.2', BeiJing: 'http://10.1.0.2' },
    source: 'openWRT.png'
  },
  {
    name: 'Esxi',
    link: {
      ShangHai: 'http://10.0.0.3',
      BeiJing: 'http://10.1.0.3',
      GuangZhou: 'http://10.2.0.3',
    },
    source: 'esxi.png'
  },
  {
    name: 'K2P',
    link: {
      ShangHai: 'http://10.0.0.4',
      BeiJing: 'http://10.1.0.4',
      ShenZhen: 'http://10.2.0.4',
    },
    source: 'phicomm.png'
  }
];
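前文提到:如果 link 字段下未定义当前选中站点的链接,则不会展示该导航项。下面用一个假设性的 shell 片段模拟这一过滤规则(真实逻辑由站点的 TypeScript 代码实现,这里仅作示意):

```shell
#!/bin/bash
# 模拟:上例中 RouterOS 只定义了 ShangHai/BeiJing 两个站点的链接
currentSite="GuangZhou"
routerOSLinks="ShangHai BeiJing"

# 当前站点不在已定义链接的站点列表中 -> 不展示该导航项
case " ${routerOSLinks} " in
  *" ${currentSite} "*) echo "show RouterOS" ;;
  *)                    echo "hide RouterOS" ;;
esac
```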
修改头像
将头像添加到 static/ 目录下,重命名为 avatar.png。
构建产物
本地构建
yarn build
站点产物保存在 build 目录下。

使用 github action 构建

Fork 一份项目代码到自己的 github 仓库,按照上面的内容修改后推送提交到 github,然后打上 tag 并推送到仓库,会自动开始构建,构建产物在 release 中。
部署
通过构建 docker 镜像,以容器的方式部署导航站点
使用静态资源服务器部署站点
这里我们将上一步的构建产物上传到宿主机
/volume1/docker/nginx/web/navigation
目录下,然后我们在下一节部署一个 Nginx 容器作为静态文件服务。
部署 Nginx 容器作为静态文件服务
nginx 文件目录如下:

.
├── config              # nginx config folder
│   └── nginx.conf
└── web                 # nginx static resource folder
    └── navigation
        ...
        └── xxx.yyy
nginx 配置文件 /volume1/docker/nginx/config/nginx.conf 内容如下:

user nginx nginx;
worker_processes 1;

# error_log logs/error.log;
# error_log logs/error.log notice;
# error_log logs/error.log info;

# pid logs/nginx.pid;

worker_rlimit_nofile 65535;

events {
    use epoll;
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    # log_format main '$remote_addr - $remote_user [$time_local] "$request" '
    #                 '$status $body_bytes_sent "$http_referer" '
    #                 '"$http_user_agent" "$http_x_forwarded_for"';

    # access_log logs/access.log main;

    sendfile on;
    # tcp_nopush on;

    keepalive_timeout 65;

    gzip on;
    gzip_min_length 1k;
    gzip_comp_level 2;
    gzip_types text/plain application/javascript application/x-javascript text/css application/xml text/javascript application/x-httpd-php image/jpeg image/gif image/png font/ttf font/otf image/svg+xml;
    gzip_vary on;
    gzip_disable "MSIE [1-6]\.";

    server {
        listen 80;
        # charset koi8-r;
        # access_log logs/host.access.log main;

        location / {
            root /web/navigation;
            index index.html index.htm;
        }

        error_page 404 /404.html;

        # redirect server error pages to the static page /50x.html
        # error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
配置文件中将 80 端口配置为内部导航站,导航页的静态资源位于 /volume1/docker/nginx/web/navigation 目录下。

创建 nginx 容器

docker run -d --net=host --name=nginx --restart=always -v /volume1/docker/nginx/config/nginx.conf:/etc/nginx/nginx.conf -v /volume1/docker/nginx/web:/web nginx:latest
然后访问宿主机 IP 即可。
通过 NFS 挂载 Synology 目录
Synology Nas 开启 NFS 服务

Debian 11 安装 autofs 服务

安装 autofs 软件包:

apt install autofs
启动并启用 autofs 服务:

systemctl enable --now autofs
Debian 11 挂载 Synology Nas 目录

创建 Synology 挂载目录

mkdir /syno
在需要挂载 NAS 目录的服务器中创建名为 /etc/auto.syno 的映射文件:

touch /etc/auto.syno
编辑 /etc/auto.master 文件,添加如下内容:

#/misc  /etc/auto.misc
/syno   /etc/auto.syno
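auto.master 每行的格式为「挂载点 映射文件 [选项]」:访问 /syno 下的子目录时,autofs 会按 /etc/auto.syno 中的映射按需挂载。示意如下(其中 --timeout 为假设性的可选参数,表示空闲指定秒数后自动卸载):

```shell
# <挂载点>   <映射文件>        [选项]
/syno       /etc/auto.syno    --timeout=300   # 假设性示例:空闲 300 秒后自动卸载
```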
编辑 auto.syno 文件,挂载 Synology NAS 目录:

downloads  -fstype=nfs,rw  10.0.0.5:/volume1/downloads
media      -fstype=nfs,rw  10.0.0.5:/volume1/media
music      -fstype=nfs,rw  10.0.0.5:/volume1/music
重新载入 autofs 服务:

systemctl reload autofs
部署 Unifi Controller
创建目录
cd /volume1/docker
mkdir -p unifi/data
mkdir -p unifi/log
mkdir -p unifi/run
运行容器
docker run -d --init --net=macvlan --ip=10.0.0.10 --ip6=fc00:1::a --mac-address=02:42:0a:00:00:0a --name=unifi -e TZ='Asia/Shanghai' -e BIND_PRIV=false -e RUNAS_UID0=false --restart=always -v /volume1/docker/unifi:/unifi -v /volume1/docker/unifi/run:/unifi/run --user=unifi jacobalberty/unifi:latest
部署 qBittorrent
创建目录
cd /volume1/docker
mkdir -p qbittorrent/config
mkdir -p downloads
运行容器
docker run -d --net=macvlan --ip=10.0.0.11 --ip6=fc00:1::b --mac-address=02:42:0a:00:00:0b --name=qbittorrent -e TZ='Asia/Shanghai' -e SavePath=/downloads -e TempPath=/downloads -e WEBUI_PORT=8085 --restart=always -v /volume1/docker/qbittorrent/config:/config -v /volume1/docker/downloads:/downloads linuxserver/qbittorrent:latest
部署 siyuan 笔记
创建目录
cd /volume1/docker
mkdir -p siyuan/workspace
运行容器
docker run -d --net=macvlan --ip=10.0.0.16 --ip6=fc00:1::10 --mac-address=02:42:0a:00:00:10 --name=siyuan --restart=always --hostname=siyuan --env TZ=Asia/Shanghai -u root -v /volume1/docker/siyuan/workspace:/siyuan/workspace b3log/siyuan:latest --workspace=/siyuan/workspace/
访问 http://10.0.0.16:6806/ 即可。
Debian 11 升级到 Debian 12
更新 Debian 11 软件包到最新
apt update
apt upgrade -y
apt dist-upgrade -y
apt autoclean
apt autoremove -y
# 重启系统
systemctl reboot
升级到 Debian 12
更新软件源,替换 bullseye 为 bookworm:

sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list
sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list.d/*.list
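在批量替换软件源之前,可以先在一份临时文件上验证 sed 表达式的效果(下面的示例行为假设性演示内容,并非真实的 sources.list):

```shell
#!/bin/bash
# 在临时文件上演示 bullseye -> bookworm 的替换
tmpfile="$(mktemp)"
echo "deb http://deb.debian.org/debian bullseye main contrib" > "${tmpfile}"
sed -i 's/bullseye/bookworm/g' "${tmpfile}"
cat "${tmpfile}"   # deb http://deb.debian.org/debian bookworm main contrib
rm -f "${tmpfile}"
```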
Debian 12 之后的版本引入了一个名为 non-free-firmware 的新仓库,用于专有驱动程序和软件包。如果你使用任何“闭源”的网络、显示、图形或其他驱动程序,从旧版的 Debian 升级并且需要这些固件二进制文件,则需要更新系统上的 /etc/apt/sources.list,以使用 non-free-firmware 来源。

sed -i 's/non-free/non-free non-free-firmware/g' /etc/apt/sources.list
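注意,这条 sed 不是幂等的:如果源文件里已经包含 non-free-firmware,再次执行会产生重复内容,因此只需执行一次。下面演示单次执行的效果(示例行为假设性演示内容):

```shell
#!/bin/bash
# 对一行典型的 sources.list 内容执行替换
echo "deb http://deb.debian.org/debian bookworm main contrib non-free" \
  | sed 's/non-free/non-free non-free-firmware/g'
# 输出: deb http://deb.debian.org/debian bookworm main contrib non-free non-free-firmware
```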
更新软件包
apt update
apt upgrade -y
apt dist-upgrade -y
更新过程中会提示是否自动重启,选择 Yes 即可。

提示是否需要更新现有的软件时,个人可以视情况而抉择:回车即不更新,输入 Y 更新软件版本。

更新完成后删除非必要的软件包和依赖:

apt autoclean
apt autoremove -y
重启系统
systemctl reboot
查看系统版本信息
uname -a
# Linux debian 6.1.0-9-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.27-1 (2023-05-08) x86_64 GNU/Linux
系统更新成功。