working version

2026-02-13 12:15:46 +02:00
parent 1217337fbb
commit 6cff65f288
37 changed files with 2232 additions and 1872 deletions

README.md

@@ -1,24 +1,18 @@
# flow

`flow` is a CLI for managing development instances, containers, dotfiles, and host bootstrap.
This repository contains the Python implementation of the tool and its command modules.

## What is implemented

- Instance access via `flow enter`
- Container lifecycle under `flow dev`
- Dotfiles repo management (`flow dotfiles`)
- Bootstrap provisioning (`flow bootstrap`)
- Package installs from unified manifest definitions (`flow package`)
- Project sync checks (`flow sync`)

## Installation

Build and install a standalone binary (no pip install required for use):

```bash
make build
make install-local
```
@@ -26,241 +20,163 @@ make install-local
This installs `flow` to `~/.local/bin/flow`.

## Core behavior

### Security model

- `flow` must run as a regular user (root/sudo invocation is rejected).
- At startup, `flow` refreshes sudo credentials once (`sudo -v`) for privileged steps.
- Package `post-install` hooks run without sudo by default.
- A package hook can use sudo only when `allow_sudo: true` is explicitly set.

### Config location and merge rules

`flow` loads all YAML files from:

1. `~/.local/share/flow/dotfiles/_shared/flow/.config/flow/` (self-hosted, if present)
2. `~/.config/flow/` (local fallback)

Files are read alphabetically (`*.yaml` and `*.yml`) and merged at the top level. If the same top-level key appears in multiple files, the later filename wins.
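A minimal sketch of that merge rule, assuming the files have already been parsed into dicts (hypothetical helper, not flow's actual loader):

```python
def merge_top_level(docs):
    """docs: list of (filename, dict) pairs already parsed from YAML.

    Files are processed in alphabetical filename order; when the same
    top-level key appears in more than one file, the later filename wins.
    Note the merge is shallow: a duplicated key replaces the whole value.
    """
    merged = {}
    for _name, data in sorted(docs, key=lambda item: item[0]):
        merged.update(data or {})  # later filename overrides earlier keys
    return merged
```

Because the merge is top-level only, splitting config across `config.yaml`, `packages.yaml`, and `profiles.yaml` works cleanly as long as each file owns distinct top-level keys.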
### Dotfiles layout (flat with reserved dirs)

Inside your dotfiles repo root:

```text
_shared/
  flow/
    .config/flow/
      config.yaml
      packages.yaml
      profiles.yaml
  git/
    .gitconfig
_root/
  general/
    etc/
      hostname
linux-auto/
  nvim/
    .config/nvim/init.lua
```

- `_shared/`: linked for all profiles
- `_root/`: linked to absolute paths (via sudo), e.g. `_root/etc/hostname -> /etc/hostname`
- Every other directory at this level is a profile name.
- When `_shared` and a profile conflict on the same target file, the profile wins.
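The `_shared`-vs-profile precedence can be sketched as a plain dict merge (illustrative only; the real linker also handles `_root`, copy mode, and conflict errors):

```python
from pathlib import Path

def resolve_links(shared_files, profile_files, home="~"):
    """shared_files / profile_files: {relative_path: source_path}.

    Both map into $HOME; when the same relative target appears in both,
    the profile entry wins over the _shared one.
    """
    home = Path(home).expanduser()
    resolved = dict(shared_files)   # start with everything from _shared
    resolved.update(profile_files)  # profile overrides _shared on conflict
    return {home / rel: src for rel, src in resolved.items()}
```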
## Manifest model

Top-level keys:

- `profiles`
- `packages`
- optional global settings such as `repository`, `paths`, `defaults`, `targets`

`environments` is not supported.
### Packages (unified)

```yaml
packages:
  - name: fd
    type: pkg
    sources:
      apt: fd-find
      dnf: fd-find
      brew: fd

  - name: wezterm
    type: cask
    sources:
      brew: wezterm

  - name: neovim
    type: binary
    source: github:neovim/neovim
    version: "0.10.4"
    asset-pattern: "nvim-{{os}}-{{arch}}.tar.gz"
    platform-map:
      linux-x64: { os: linux, arch: x64 }
      linux-arm64: { os: linux, arch: arm64 }
      darwin-arm64: { os: macos, arch: arm64 }
    extract-dir: "nvim-{{os}}64"
    install:
      bin: [bin/nvim]
      share: [share/nvim]
      man: [share/man/man1/nvim.1]
      lib: [lib/libnvim.so]
```
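Resolving a manager-specific name from a unified entry amounts to a lookup with a fallback to the logical name (a sketch, not flow's actual resolver):

```python
def resolve_package_name(pkg, manager):
    """Pick the manager-specific name from a unified package entry.

    Falls back to the logical `name` when the entry has no mapping
    for the selected package manager.
    """
    sources = pkg.get("sources", {})
    return sources.get(manager, pkg["name"])
```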
### Profile package syntaxes

All of these are supported in one profile list:

```yaml
profiles:
  macos-dev:
    os: macos
    packages:
      - git
      - cask/wezterm
      - binary/neovim
      - name: docker
        allow_sudo: true
        post-install: |
          sudo groupadd docker || true
          sudo usermod -aG docker $USER
```
### Templates
- `{{ env.VAR_NAME }}`
- `{{ version }}`
- `{{ os }}`
- `{{ arch }}`
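A rough idea of how these placeholders expand (hypothetical renderer; flow's actual template engine may differ):

```python
import re

def render(template, variables, env=None):
    """Substitute {{ name }} and {{ env.NAME }} placeholders.

    Whitespace inside the braces is optional; unknown names expand
    to an empty string in this sketch.
    """
    env = env or {}

    def replace(match):
        key = match.group(1)
        if key.startswith("env."):
            return str(env.get(key[4:], ""))
        return str(variables.get(key, ""))

    return re.sub(r"\{\{\s*([\w.]+)\s*\}\}", replace, template)
```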
### Bootstrap profile features

- `os` is required (`linux` or `macos`)
- `package-manager` is optional (auto-detected if omitted)
- the default locale is `en_US.UTF-8`
- shell auto-install plus `chsh` when `shell:` is declared but not yet installed
- `requires` validation for required env vars
- `ssh-keygen` definitions
- `runcmd` (runs after package installation)
- automatic config linking (`_shared` + profile + `_root`)
- `post-link` hook (runs after the symlink phase)
- config skip patterns:
  - package names (e.g. `nvim`)
  - `_shared`
  - `_profile`
  - `_root`
## Command overview

### Enter instances

```bash
flow enter personal@orb
flow enter root@personal@orb
flow enter personal@orb --dry-run
```

If your local terminal uses `xterm-ghostty` or `wezterm`, `flow enter` shows a terminfo warning and a manual fix command before connecting. `flow` never installs terminfo on the target automatically.

### Containers

```bash
flow dev create api -i tm0/node -p ~/projects/api
flow dev connect api
flow dev exec api -- npm test
flow dev list
flow dev stop api
flow dev remove api
```

### Dotfiles

```bash
flow dotfiles init --repo git@github.com:you/dotfiles.git
flow dotfiles link --profile linux-auto
flow dotfiles status
flow dotfiles relink
flow dotfiles clean --dry-run
flow dotfiles repo status
flow dotfiles repo pull --relink
flow dotfiles repo push
```

### Bootstrap

```bash
flow bootstrap list
flow bootstrap show linux-auto
flow bootstrap run --profile linux-auto --var USER_EMAIL=you@example.com
```

`flow bootstrap` auto-detects the package manager (`brew`, `apt`, `dnf`) when `package-manager` is not set in a profile.
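Auto-detection can be as simple as probing PATH for each manager (a sketch; the probe order shown here is an assumption, not flow's documented preference):

```python
import shutil

def detect_package_manager(which=shutil.which):
    """Return the first package manager found on PATH, else None.

    `which` is injectable so the probe can be tested without relying
    on the host system's installed tools.
    """
    for manager in ("brew", "apt", "dnf"):
        if which(manager):
            return manager
    return None
```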
### Packages

```bash
flow package install neovim
flow package list
flow package list --all
flow package remove neovim
```

### Sync

```bash
flow sync check
flow sync check --no-fetch
flow sync fetch
flow sync summary
```

### Completion

```bash
flow completion install-zsh
flow completion zsh
```
## State format policy

`flow` currently supports only the v2 dotfiles link state format (`linked.json`). Older state formats are intentionally not supported.
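For illustration, a v2 `linked.json` could look like the following (shape inferred from this commit's state-saving code; the paths are made up):

```json
{
  "version": 2,
  "links": {
    "zsh": {
      "/home/you/.zshrc": {
        "source": "/home/you/.local/share/flow/dotfiles/_shared/zsh/.zshrc",
        "is_directory_link": false
      }
    }
  }
}
```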
## CLI behavior

- User errors return non-zero exit codes.
- External command failures are surfaced as concise one-line errors (no traceback spam).
- `Ctrl+C` exits with code `130`.

## Zsh completion

Recommended one-shot install:

```bash
flow completion install-zsh
```

Manual install (equivalent):

```bash
mkdir -p ~/.zsh/completions
flow completion zsh > ~/.zsh/completions/_flow
```

Then ensure your `.zshrc` includes:

```bash
fpath=(~/.zsh/completions $fpath)
autoload -Uz compinit && compinit
```

Completion is dynamic and pulls values from your current config/manifest/state (for example bootstrap profiles, package names, dotfiles packages, and configured `enter` targets).
## Development

```bash
python3 -m venv .venv
.venv/bin/pip install -e ".[dev]"
python3 -m pytest
```


@@ -1,27 +1,27 @@
# Example working scenario

This folder contains a complete dotfiles + bootstrap setup for the current `flow` schema.

## What this example shows

- Flat repo-root layout with reserved dirs:
  - `_shared/` (shared configs)
  - `_root/` (root-targeted configs)
  - profile dirs (`linux-auto/`, `macos-dev/`)
- Unified YAML config under `_shared/flow/.config/flow/*.yaml`
- Profile package list syntax: string, type prefix, and object entries
- Binary install definition with `asset-pattern`, `platform-map`, `extract-dir`, and `install`
- Required env vars, templating, SSH keygen, runcmd, post-link, and config skip patterns

## Layout

- `dotfiles-repo/_shared/flow/.config/flow/config.yaml`
- `dotfiles-repo/_shared/flow/.config/flow/packages.yaml`
- `dotfiles-repo/_shared/flow/.config/flow/profiles.yaml`
- `dotfiles-repo/_shared/...`
- `dotfiles-repo/_root/...`
- `dotfiles-repo/linux-auto/...`
- `dotfiles-repo/macos-dev/...`

## Quick start
@@ -35,7 +35,7 @@ Initialize and link dotfiles:
```bash
flow dotfiles init --repo "$EXAMPLE_REPO"
flow dotfiles link --profile linux-auto
flow dotfiles status
```
@@ -43,15 +43,15 @@ Check repo commands:
```bash
flow dotfiles repo status
flow dotfiles repo pull --relink --profile linux-auto
flow dotfiles repo push
```

Edit package or file/path targets:

```bash
flow dotfiles edit git --no-commit
flow dotfiles edit _shared/flow/.config/flow/profiles.yaml --no-commit
```
Inspect bootstrap profiles and package resolution:
@@ -59,20 +59,13 @@ Inspect bootstrap profiles and package resolution:
```bash
flow bootstrap list
flow bootstrap packages --resolved
flow bootstrap packages --profile linux-auto --resolved
flow bootstrap show linux-auto
```

Run bootstrap dry-run:

```bash
flow bootstrap run --profile linux-auto --var TARGET_HOSTNAME=devbox --var USER_EMAIL=you@example.com --dry-run
flow bootstrap run --profile macos-dev --dry-run
```


@@ -0,0 +1 @@
{{ env.TARGET_HOSTNAME }}


@@ -0,0 +1,3 @@
#!/usr/bin/env sh
echo "custom root script"


@@ -0,0 +1,15 @@
repository:
  dotfiles-url: /ABSOLUTE/PATH/TO/flow-cli/example/dotfiles-repo
  dotfiles-branch: main

paths:
  projects-dir: ~/projects

defaults:
  container-registry: registry.example.com
  container-tag: latest
  tmux-session: default

targets:
  personal: orb personal.orb
  work@ec2: work.internal ~/.ssh/id_work


@@ -0,0 +1,37 @@
packages:
  - name: fd
    type: pkg
    sources:
      apt: fd-find
      dnf: fd-find
      brew: fd

  - name: ripgrep
    type: pkg
    sources:
      apt: ripgrep
      dnf: ripgrep
      brew: ripgrep

  - name: wezterm
    type: cask
    sources:
      brew: wezterm

  - name: neovim
    type: binary
    source: github:neovim/neovim
    version: "0.10.4"
    asset-pattern: "nvim-{{os}}-{{arch}}.tar.gz"
    platform-map:
      linux-x64: { os: linux, arch: x64 }
      linux-arm64: { os: linux, arch: arm64 }
      darwin-arm64: { os: macos, arch: arm64 }
    extract-dir: "nvim-{{os}}64"
    install:
      bin: [bin/nvim]
      share: [share/nvim]
      man: [share/man/man1/nvim.1]

  - name: docker
    type: pkg


@@ -0,0 +1,39 @@
profiles:
  linux-auto:
    os: linux
    requires: [TARGET_HOSTNAME, USER_EMAIL]
    hostname: "{{ env.TARGET_HOSTNAME }}"
    shell: zsh
    packages:
      - git
      - tmux
      - zsh
      - fd
      - ripgrep
      - binary/neovim
      - name: docker
        allow_sudo: true
        post-install: |
          sudo groupadd docker || true
          sudo usermod -aG docker $USER
    ssh-keygen:
      - type: ed25519
        filename: id_ed25519
        comment: "{{ env.USER_EMAIL }}"
    configs:
      skip: [tmux]
    runcmd:
      - mkdir -p ~/projects
      - git config --global user.email "{{ env.USER_EMAIL }}"
    post-link: |
      echo "All configs linked."
      echo "Restart your shell to apply changes."

  macos-dev:
    os: macos
    shell: zsh
    packages:
      - git
      - tmux
      - cask/wezterm
      - binary/neovim


@@ -0,0 +1,4 @@
export EDITOR=vim
export PATH="$HOME/.local/bin:$PATH"
alias ll='ls -lah'


@@ -1,15 +0,0 @@
[repository]
dotfiles_url = /ABSOLUTE/PATH/TO/flow-cli/example/dotfiles-repo
dotfiles_branch = main
[paths]
projects_dir = ~/projects
[defaults]
container_registry = registry.example.com
container_tag = latest
tmux_session = default
[targets]
personal = orb personal.orb
work@ec2 = work.internal ~/.ssh/id_work


@@ -1,2 +0,0 @@
export FLOW_ENV=example
export FLOW_EDITOR=vim


@@ -1,96 +0,0 @@
profiles:
  linux-auto:
    os: linux
    requires: [TARGET_HOSTNAME, USER_EMAIL]
    hostname: "$TARGET_HOSTNAME"
    locale: en_US.UTF-8
    shell: zsh
    packages:
      standard: [git, tmux, zsh, fd, ripgrep, python-dev]
      binary: [neovim, lazygit]
    ssh_keygen:
      - type: ed25519
        filename: id_ed25519
        comment: "$USER_EMAIL"
    configs: [flow, zsh, git, tmux, nvim, bin]
    runcmd:
      - mkdir -p ~/projects
      - git config --global user.email "$USER_EMAIL"
  ubuntu-dev:
    os: linux
    package-manager: apt
    packages:
      package: [git, tmux, zsh, fd, ripgrep, python-dev]
      binary: [neovim]
    configs: [flow, zsh, git, tmux]
  fedora-dev:
    os: linux
    package-manager: dnf
    packages:
      standard: [git, tmux, zsh, fd, ripgrep, python-dev]
      binary: [neovim]
    configs: [flow, zsh, git, tmux]
  macos-dev:
    os: macos
    package-manager: brew
    packages:
      standard: [git, tmux, zsh, fd, ripgrep]
      cask: [wezterm]
      binary: [neovim]
    configs: [flow, zsh, git, nvim]
  work-linux:
    os: linux
    package-manager: apt
    requires: [WORK_EMAIL]
    packages:
      standard: [git, tmux, zsh]
    configs: [git, zsh]
    runcmd:
      - git config --global user.email "$WORK_EMAIL"

package-map:
  fd:
    apt: fd-find
    dnf: fd-find
    brew: fd
  python-dev:
    apt: python3-dev
    dnf: python3-devel
    brew: python
  ripgrep:
    apt: ripgrep
    dnf: ripgrep
    brew: ripgrep

binaries:
  neovim:
    source: github:neovim/neovim
    version: "0.10.4"
    asset-pattern: "nvim-{{os}}-{{arch}}.tar.gz"
    platform-map:
      linux-amd64: { os: linux, arch: x86_64 }
      linux-arm64: { os: linux, arch: arm64 }
      macos-arm64: { os: macos, arch: arm64 }
    install-script: |
      curl -fL "{{downloadUrl}}" -o /tmp/nvim.tar.gz
      tar -xzf /tmp/nvim.tar.gz -C /tmp
      mkdir -p ~/.local/bin
      cp /tmp/nvim-*/bin/nvim ~/.local/bin/nvim
  lazygit:
    source: github:jesseduffield/lazygit
    version: "0.44.1"
    asset-pattern: "lazygit_{{version}}_{{os}}_{{arch}}.tar.gz"
    platform-map:
      linux-amd64: { os: Linux, arch: x86_64 }
      linux-arm64: { os: Linux, arch: arm64 }
      macos-arm64: { os: Darwin, arch: arm64 }
    install-script: |
      curl -fL "{{downloadUrl}}" -o /tmp/lazygit.tar.gz
      tar -xzf /tmp/lazygit.tar.gz -C /tmp
      mkdir -p ~/.local/bin
      cp /tmp/lazygit ~/.local/bin/lazygit


@@ -1,8 +0,0 @@
export EDITOR=vim
export PATH="$HOME/.local/bin:$PATH"
alias ll='ls -lah'
if [ -f "$HOME/.config/flow/env.sh" ]; then
  . "$HOME/.config/flow/env.sh"
fi


@@ -0,0 +1,3 @@
[user]
name = Example Linux User
email = linux@example.com


@@ -3,5 +3,3 @@ export PATH="$HOME/.local/bin:$PATH"
alias ll='ls -lah'
alias gs='git status -sb'


@@ -1,6 +0,0 @@
[user]
name = Example Work User
email = work@example.com
[url "git@github.com:work/"]
insteadOf = https://github.com/work/


@@ -1,6 +1,8 @@
"""CLI entry point — argparse routing and context creation.""" """CLI entry point — argparse routing and context creation."""
import argparse import argparse
import os
import shutil
import subprocess import subprocess
import sys import sys
@@ -14,6 +16,27 @@ from flow.core.platform import detect_platform
COMMAND_MODULES = [enter, container, dotfiles, bootstrap, package, sync, completion]


def _ensure_non_root(console: ConsoleLogger) -> None:
    if os.geteuid() == 0:
        console.error("flow must be run as a regular user (not root/sudo)")
        sys.exit(1)


def _refresh_sudo_credentials(console: ConsoleLogger) -> None:
    if os.environ.get("FLOW_SKIP_SUDO_REFRESH") == "1":
        return
    if not shutil.which("sudo"):
        console.error("sudo is required but was not found in PATH")
        sys.exit(1)
    try:
        subprocess.run(["sudo", "-v"], check=True)
    except subprocess.CalledProcessError:
        console.error("Failed to refresh sudo credentials")
        sys.exit(1)
def main():
    parser = argparse.ArgumentParser(
        prog="flow",
@@ -34,6 +57,9 @@ def main():
        parser.print_help()
        sys.exit(0)

    console = ConsoleLogger()
    _ensure_non_root(console)

    if args.command == "completion":
        handler = getattr(args, "handler", None)
        if handler:
@@ -43,7 +69,7 @@ def main():
            return

    ensure_dirs()
    _refresh_sudo_credentials(console)

    try:
        platform_info = detect_platform()

(diff suppressed: file too large)


@@ -115,7 +115,17 @@ def _list_bootstrap_profiles() -> List[str]:
def _list_manifest_packages() -> List[str]:
    manifest = _safe_manifest()
    packages = manifest.get("packages", [])
    if not isinstance(packages, list):
        return []
    names = []
    for pkg in packages:
        if isinstance(pkg, dict) and isinstance(pkg.get("name"), str):
            if str(pkg.get("type", "pkg")) == "binary":
                names.append(pkg["name"])
    return sorted(set(names))
def _list_installed_packages() -> List[str]:
@@ -132,36 +142,47 @@ def _list_installed_packages() -> List[str]:
def _list_dotfiles_profiles() -> List[str]:
    flow_dir = DOTFILES_DIR
    if not flow_dir.is_dir():
        return []
    return sorted(
        [
            p.name
            for p in flow_dir.iterdir()
            if p.is_dir() and not p.name.startswith(".") and not p.name.startswith("_")
        ]
    )


def _list_dotfiles_packages(profile: Optional[str] = None) -> List[str]:
    package_names: Set[str] = set()
    flow_dir = DOTFILES_DIR
    if not flow_dir.is_dir():
        return []

    shared = flow_dir / "_shared"
    if shared.is_dir():
        for pkg in shared.iterdir():
            if pkg.is_dir() and not pkg.name.startswith("."):
                package_names.add(pkg.name)

    if profile:
        profile_dir = flow_dir / profile
        if profile_dir.is_dir():
            for pkg in profile_dir.iterdir():
                if pkg.is_dir() and not pkg.name.startswith("."):
                    package_names.add(pkg.name)
    else:
        for profile_dir in flow_dir.iterdir():
            if profile_dir.name.startswith(".") or profile_dir.name.startswith("_"):
                continue
            if not profile_dir.is_dir():
                continue
            for pkg in profile_dir.iterdir():
                if pkg.is_dir() and not pkg.name.startswith("."):
                    package_names.add(pkg.name)

    return sorted(package_names)


@@ -1,4 +1,4 @@
"""flow dotfiles — dotfile management with GNU Stow-style symlinking.""" """flow dotfiles — dotfile management with flat repo layout."""
import argparse import argparse
import json import json
@@ -7,48 +7,53 @@ import shlex
import shutil
import subprocess
import sys
from dataclasses import dataclass
from pathlib import Path
from typing import Dict, List, Optional, Set

from flow.core.config import FlowContext
from flow.core.paths import DOTFILES_DIR, LINKED_STATE

RESERVED_SHARED = "_shared"
RESERVED_ROOT = "_root"


@dataclass
class LinkSpec:
    source: Path
    target: Path
    package: str
    is_directory_link: bool = False


def register(subparsers):
    p = subparsers.add_parser("dotfiles", aliases=["dot"], help="Manage dotfiles")
    sub = p.add_subparsers(dest="dotfiles_command")

    init = sub.add_parser("init", help="Clone dotfiles repository")
    init.add_argument("--repo", help="Override repository URL")
    init.set_defaults(handler=run_init)

    link = sub.add_parser("link", help="Create symlinks for dotfile packages")
    link.add_argument("packages", nargs="*", help="Specific packages to link (default: all)")
    link.add_argument("--profile", help="Profile to use")
    link.add_argument("--copy", action="store_true", help="Copy instead of symlink")
    link.add_argument("--force", action="store_true", help="Overwrite existing files")
    link.add_argument("--dry-run", action="store_true", help="Show what would be done")
    link.set_defaults(handler=run_link)

    unlink = sub.add_parser("unlink", help="Remove dotfile symlinks")
    unlink.add_argument("packages", nargs="*", help="Specific packages to unlink (default: all)")
    unlink.set_defaults(handler=run_unlink)

    status = sub.add_parser("status", help="Show dotfiles link status")
    status.set_defaults(handler=run_status)

    sync = sub.add_parser("sync", help="Pull latest dotfiles from remote")
    sync.add_argument("--relink", action="store_true", help="Run relink after pull")
    sync.add_argument("--profile", help="Profile to use when relinking")
    sync.set_defaults(handler=run_sync)

    repo = sub.add_parser("repo", help="Manage dotfiles repository")
    repo_sub = repo.add_subparsers(dest="dotfiles_repo_command")
@@ -56,8 +61,18 @@ def register(subparsers):
    repo_status.set_defaults(handler=run_repo_status)

    repo_pull = repo_sub.add_parser("pull", help="Pull latest changes")
    repo_pull.add_argument(
        "--rebase",
        dest="rebase",
        action="store_true",
        help="Use rebase strategy (default)",
    )
    repo_pull.add_argument(
        "--no-rebase",
        dest="rebase",
        action="store_false",
        help="Disable rebase strategy",
    )
    repo_pull.add_argument("--relink", action="store_true", help="Run relink after pull")
    repo_pull.add_argument("--profile", help="Profile to use when relinking")
    repo_pull.set_defaults(rebase=True)
@@ -68,18 +83,15 @@ def register(subparsers):
    repo.set_defaults(handler=lambda ctx, args: repo.print_help())

    relink = sub.add_parser("relink", help="Refresh symlinks after changes")
    relink.add_argument("packages", nargs="*", help="Specific packages to relink (default: all)")
    relink.add_argument("--profile", help="Profile to use")
    relink.set_defaults(handler=run_relink)

    clean = sub.add_parser("clean", help="Remove broken symlinks")
    clean.add_argument("--dry-run", action="store_true", help="Show what would be done")
    clean.set_defaults(handler=run_clean)

    edit = sub.add_parser("edit", help="Edit package or path with auto-commit")
    edit.add_argument("target", help="Package name or path inside dotfiles repo")
    edit.add_argument("--no-commit", action="store_true", help="Skip auto-commit")
p.set_defaults(handler=lambda ctx, args: p.print_help()) p.set_defaults(handler=lambda ctx, args: p.print_help())
def _flow_config_dir(dotfiles_dir: Path = DOTFILES_DIR) -> Path:
return dotfiles_dir
def _is_root_package(package: str) -> bool:
return package == RESERVED_ROOT or package.startswith(f"{RESERVED_ROOT}/")
def _insert_spec(
desired: Dict[Path, LinkSpec],
*,
target: Path,
source: Path,
package: str,
) -> None:
existing = desired.get(target)
if existing is not None:
raise RuntimeError(
"Conflicting dotfile targets are not allowed: "
f"{target} from {existing.package} and {package}"
)
desired[target] = LinkSpec(source=source, target=target, package=package)
def _load_state() -> dict: def _load_state() -> dict:
if LINKED_STATE.exists(): if LINKED_STATE.exists():
with open(LINKED_STATE) as f: with open(LINKED_STATE, "r", encoding="utf-8") as handle:
return json.load(f) return json.load(handle)
return {"links": {}} return {"version": 2, "links": {}}
def _save_state(state: dict): def _save_state(state: dict) -> None:
LINKED_STATE.parent.mkdir(parents=True, exist_ok=True) LINKED_STATE.parent.mkdir(parents=True, exist_ok=True)
with open(LINKED_STATE, "w") as f: with open(LINKED_STATE, "w", encoding="utf-8") as handle:
json.dump(state, f, indent=2) json.dump(state, handle, indent=2)
def _load_link_specs_from_state() -> Dict[Path, LinkSpec]:
    state = _load_state()
    links = state.get("links", {})
    if not isinstance(links, dict):
        raise RuntimeError("Unsupported linked state format. Remove linked.json and relink dotfiles.")

    resolved: Dict[Path, LinkSpec] = {}
    for package, pkg_links in links.items():
        if not isinstance(pkg_links, dict):
            raise RuntimeError("Unsupported linked state format. Remove linked.json and relink dotfiles.")

        for target_str, link_info in pkg_links.items():
            if not isinstance(link_info, dict) or "source" not in link_info:
                raise RuntimeError(
                    "Unsupported linked state format. Remove linked.json and relink dotfiles."
                )

            target = Path(target_str)
            resolved[target] = LinkSpec(
                source=Path(link_info["source"]),
                target=target,
                package=str(package),
                is_directory_link=bool(link_info.get("is_directory_link", False)),
            )
    return resolved
def _save_link_specs_to_state(specs: Dict[Path, LinkSpec]) -> None:
    grouped: Dict[str, Dict[str, dict]] = {}
    for spec in sorted(specs.values(), key=lambda s: str(s.target)):
        grouped.setdefault(spec.package, {})[str(spec.target)] = {
            "source": str(spec.source),
            "is_directory_link": spec.is_directory_link,
        }

    _save_state({"version": 2, "links": grouped})
def _list_profiles(flow_dir: Path) -> List[str]:
if not flow_dir.exists() or not flow_dir.is_dir():
return []
profiles: List[str] = []
for child in flow_dir.iterdir():
if not child.is_dir():
continue
if child.name.startswith("."):
continue
if child.name.startswith("_"):
continue
profiles.append(child.name)
return sorted(profiles)
def _walk_package(source_dir: Path):
    """Yield (source_file, relative_path) pairs for a package directory."""
    for root, _dirs, files in os.walk(source_dir):
        for fname in files:
            src = Path(root) / fname
            rel = src.relative_to(source_dir)
            yield src, rel
def _profile_skip_set(ctx: FlowContext, profile: Optional[str]) -> Set[str]:
    if not profile:
        return set()

    profiles = ctx.manifest.get("profiles", {})
    if not isinstance(profiles, dict):
        return set()
    profile_cfg = profiles.get(profile, {})
    if not isinstance(profile_cfg, dict):
        return set()
    configs = profile_cfg.get("configs", {})
    if not isinstance(configs, dict):
        return set()
    skip = configs.get("skip", [])
    if not isinstance(skip, list):
        return set()
    return {str(item) for item in skip if item}
def _discover_packages(dotfiles_dir: Path, profile: Optional[str] = None) -> dict:
    flow_dir = _flow_config_dir(dotfiles_dir)
    packages = {}

    shared = flow_dir / RESERVED_SHARED
    if shared.is_dir():
        for pkg in sorted(shared.iterdir()):
            if pkg.is_dir() and not pkg.name.startswith("."):
                packages[pkg.name] = pkg

    if profile:
        profile_dir = flow_dir / profile
        if profile_dir.is_dir():
            for pkg in sorted(profile_dir.iterdir()):
                if pkg.is_dir() and not pkg.name.startswith("."):
                    packages[pkg.name] = pkg

    return packages
def _find_package_dir(package_name: str, dotfiles_dir: Path = DOTFILES_DIR) -> Optional[Path]:
    flow_dir = _flow_config_dir(dotfiles_dir)

    shared_dir = flow_dir / RESERVED_SHARED / package_name
    if shared_dir.exists():
        return shared_dir

    for profile in _list_profiles(flow_dir):
        profile_pkg = flow_dir / profile / package_name
        if profile_pkg.exists():
            return profile_pkg

    return None
@@ -209,10 +287,313 @@ def _resolve_edit_target(target: str, dotfiles_dir: Path = DOTFILES_DIR) -> Opti
    return None
def _ensure_dotfiles_dir(ctx: FlowContext):
if not DOTFILES_DIR.exists():
ctx.console.error(f"Dotfiles not found at {DOTFILES_DIR}. Run 'flow dotfiles init' first.")
sys.exit(1)
def _ensure_flow_dir(ctx: FlowContext):
_ensure_dotfiles_dir(ctx)
flow_dir = _flow_config_dir()
if not flow_dir.exists() or not flow_dir.is_dir():
ctx.console.error(f"Dotfiles repository not found at {flow_dir}")
sys.exit(1)
def _run_dotfiles_git(*cmd, capture: bool = True) -> subprocess.CompletedProcess:
return subprocess.run(
["git", "-C", str(DOTFILES_DIR)] + list(cmd),
capture_output=capture,
text=True,
)
def _pull_dotfiles(ctx: FlowContext, *, rebase: bool = True) -> None:
pull_cmd = ["pull"]
if rebase:
pull_cmd.append("--rebase")
strategy = "with rebase" if rebase else "without rebase"
ctx.console.info(f"Pulling latest dotfiles ({strategy})...")
result = _run_dotfiles_git(*pull_cmd, capture=True)
if result.returncode != 0:
raise RuntimeError(f"Git pull failed: {result.stderr.strip()}")
output = result.stdout.strip()
if output:
print(output)
ctx.console.success("Dotfiles synced.")
def _resolve_profile(ctx: FlowContext, requested: Optional[str]) -> Optional[str]:
flow_dir = _flow_config_dir()
profiles = _list_profiles(flow_dir)
if requested:
if requested not in profiles:
raise RuntimeError(f"Profile not found: {requested}")
return requested
if len(profiles) == 1:
return profiles[0]
if len(profiles) > 1:
raise RuntimeError(f"Multiple profiles available. Use --profile: {', '.join(profiles)}")
return None
def _is_in_home(path: Path, home: Path) -> bool:
try:
path.relative_to(home)
return True
except ValueError:
return False
def _run_sudo(cmd: List[str], *, dry_run: bool = False) -> None:
if dry_run:
print(" " + " ".join(shlex.quote(part) for part in (["sudo"] + cmd)))
return
subprocess.run(["sudo"] + cmd, check=True)
def _remove_target(path: Path, *, use_sudo: bool, dry_run: bool) -> None:
if not (path.exists() or path.is_symlink()):
return
if path.is_dir() and not path.is_symlink():
raise RuntimeError(f"Cannot overwrite directory: {path}")
if use_sudo:
_run_sudo(["rm", "-f", str(path)], dry_run=dry_run)
return
if dry_run:
print(f" REMOVE: {path}")
return
path.unlink()
def _same_symlink(target: Path, source: Path) -> bool:
if not target.is_symlink():
return False
return target.resolve(strict=False) == source.resolve(strict=False)
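To illustrate the comparison used here: `resolve(strict=False)` canonicalizes both sides, so the check also holds when the link was created through a different spelling of the source path (e.g. via a parent directory that is itself a symlink). A disposable sketch, not part of the module:

```python
from pathlib import Path
from tempfile import TemporaryDirectory

with TemporaryDirectory() as tmp:
    root = Path(tmp)
    source = root / "dotfiles" / "gitconfig"
    source.parent.mkdir()
    source.write_text("[user]\n")

    target = root / ".gitconfig"
    target.symlink_to(source)

    # Same predicate as _same_symlink: a symlink whose canonical
    # destination equals the canonical source path.
    same = target.is_symlink() and (
        target.resolve(strict=False) == source.resolve(strict=False)
    )
```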
def _collect_home_specs(
flow_dir: Path,
home: Path,
profile: Optional[str],
skip: Set[str],
package_filter: Optional[Set[str]],
) -> Dict[Path, LinkSpec]:
desired: Dict[Path, LinkSpec] = {}
if RESERVED_SHARED not in skip:
shared_dir = flow_dir / RESERVED_SHARED
if shared_dir.is_dir():
for pkg_dir in sorted(shared_dir.iterdir()):
if not pkg_dir.is_dir() or pkg_dir.name.startswith("."):
continue
                if package_filter is not None and pkg_dir.name not in package_filter:
continue
if pkg_dir.name in skip:
continue
package_name = f"{RESERVED_SHARED}/{pkg_dir.name}"
for src, rel in _walk_package(pkg_dir):
_insert_spec(
desired,
target=home / rel,
source=src,
package=package_name,
)
if profile and "_profile" not in skip:
profile_dir = flow_dir / profile
if profile_dir.is_dir():
for pkg_dir in sorted(profile_dir.iterdir()):
if not pkg_dir.is_dir() or pkg_dir.name.startswith("."):
continue
                if package_filter is not None and pkg_dir.name not in package_filter:
continue
if pkg_dir.name in skip:
continue
package_name = f"{profile}/{pkg_dir.name}"
for src, rel in _walk_package(pkg_dir):
_insert_spec(
desired,
target=home / rel,
source=src,
package=package_name,
)
return desired
def _collect_root_specs(flow_dir: Path, skip: Set[str], include_root: bool) -> Dict[Path, LinkSpec]:
desired: Dict[Path, LinkSpec] = {}
if not include_root or RESERVED_ROOT in skip:
return desired
root_dir = flow_dir / RESERVED_ROOT
if not root_dir.is_dir():
return desired
for root_pkg_dir in sorted(root_dir.iterdir()):
if not root_pkg_dir.is_dir() or root_pkg_dir.name.startswith("."):
continue
for src, rel in _walk_package(root_pkg_dir):
target = Path("/") / rel
_insert_spec(
desired,
target=target,
source=src,
package=f"{RESERVED_ROOT}/{root_pkg_dir.name}",
)
return desired
def _validate_conflicts(
desired: Dict[Path, LinkSpec],
current: Dict[Path, LinkSpec],
force: bool,
) -> List[str]:
conflicts: List[str] = []
for target, spec in desired.items():
if not (target.exists() or target.is_symlink()):
continue
if _same_symlink(target, spec.source):
continue
if target in current:
continue
if target.is_dir() and not target.is_symlink():
conflicts.append(f"Conflict: {target} is a directory")
continue
if not force:
conflicts.append(f"Conflict: {target} already exists and is not managed by flow")
return conflicts
def _apply_link_spec(spec: LinkSpec, *, copy: bool, dry_run: bool) -> bool:
use_sudo = _is_root_package(spec.package)
if copy and use_sudo:
print(f" SKIP COPY (root target): {spec.target}")
return False
if use_sudo:
_run_sudo(["mkdir", "-p", str(spec.target.parent)], dry_run=dry_run)
_run_sudo(["ln", "-sfn", str(spec.source), str(spec.target)], dry_run=dry_run)
return True
if dry_run:
if copy:
print(f" COPY: {spec.source} -> {spec.target}")
else:
print(f" LINK: {spec.target} -> {spec.source}")
return True
spec.target.parent.mkdir(parents=True, exist_ok=True)
if copy:
shutil.copy2(spec.source, spec.target)
return True
spec.target.symlink_to(spec.source)
return True
def _sync_to_desired(
ctx: FlowContext,
desired: Dict[Path, LinkSpec],
*,
force: bool,
dry_run: bool,
copy: bool,
) -> None:
current = _load_link_specs_from_state()
conflicts = _validate_conflicts(desired, current, force)
if conflicts:
for conflict in conflicts:
ctx.console.error(conflict)
if not force:
raise RuntimeError("Use --force to overwrite existing files")
for target in sorted(current.keys(), key=str):
if target in desired:
continue
use_sudo = _is_root_package(current[target].package) or not _is_in_home(target, Path.home())
_remove_target(target, use_sudo=use_sudo, dry_run=dry_run)
del current[target]
for target in sorted(desired.keys(), key=str):
spec = desired[target]
if _same_symlink(target, spec.source):
current[target] = spec
continue
exists = target.exists() or target.is_symlink()
if exists:
use_sudo = _is_root_package(spec.package) or not _is_in_home(target, Path.home())
_remove_target(target, use_sudo=use_sudo, dry_run=dry_run)
applied = _apply_link_spec(spec, copy=copy, dry_run=dry_run)
if applied:
current[target] = spec
if not dry_run:
_save_link_specs_to_state(current)
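`_sync_to_desired` is a plain desired-state reconcile: drop targets that are tracked but no longer desired, then (re)link everything desired. The core bookkeeping reduces to set arithmetic over target paths (illustrative only; the real code also re-links targets whose symlink destination changed):

```python
current = {"/home/u/.zshrc", "/home/u/.old"}    # targets tracked in linked.json
desired = {"/home/u/.zshrc", "/home/u/.vimrc"}  # targets computed from the repo

to_remove = sorted(current - desired)  # stale links to delete
to_create = sorted(desired - current)  # new links to apply
```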
def _desired_links_for_profile(
ctx: FlowContext,
profile: Optional[str],
package_filter: Optional[Set[str]],
) -> Dict[Path, LinkSpec]:
flow_dir = _flow_config_dir()
home = Path.home()
skip = _profile_skip_set(ctx, profile)
include_root = package_filter is None or RESERVED_ROOT in package_filter
effective_filter = None
if package_filter is not None:
effective_filter = set(package_filter)
effective_filter.discard(RESERVED_ROOT)
home_specs = _collect_home_specs(flow_dir, home, profile, skip, effective_filter)
root_specs = _collect_root_specs(flow_dir, skip, include_root)
combined = {}
combined.update(home_specs)
for target, spec in root_specs.items():
_insert_spec(
combined,
target=target,
source=spec.source,
package=spec.package,
)
return combined
def run_init(ctx: FlowContext, args):
    repo_url = args.repo or ctx.config.dotfiles_url
    if not repo_url:
        ctx.console.error("No dotfiles repository URL. Set it in YAML config or pass --repo.")
        sys.exit(1)

    if DOTFILES_DIR.exists():

@@ -228,165 +609,108 @@ def run_init(ctx: FlowContext, args):
def run_link(ctx: FlowContext, args):
    _ensure_flow_dir(ctx)

    try:
        profile = _resolve_profile(ctx, args.profile)
    except RuntimeError as e:
        ctx.console.error(str(e))
        sys.exit(1)

    package_filter = set(args.packages) if args.packages else None
    desired = _desired_links_for_profile(ctx, profile, package_filter)
    if not desired:
        ctx.console.warn("No link targets found for selected profile/filters")
        return

    try:
        _sync_to_desired(
            ctx,
            desired,
            force=args.force,
            dry_run=args.dry_run,
            copy=args.copy,
        )
    except RuntimeError as e:
        ctx.console.error(str(e))
        sys.exit(1)

    if args.dry_run:
        return

    ctx.console.success(f"Linked {len(desired)} item(s)")


def _package_match(package_id: str, filters: Set[str]) -> bool:
    if package_id in filters:
        return True
    # Allow users to pass just the package basename (e.g. zsh)
    base = package_id.split("/", 1)[-1]
    return base in filters
def run_unlink(ctx: FlowContext, args):
    try:
        current = _load_link_specs_from_state()
    except RuntimeError as e:
        ctx.console.error(str(e))
        sys.exit(1)

    if not current:
        ctx.console.info("No linked dotfiles found.")
        return

    filters = set(args.packages) if args.packages else None

    removed = 0
    for target in sorted(list(current.keys()), key=str):
        spec = current[target]
        if filters and not _package_match(spec.package, filters):
            continue
        use_sudo = _is_root_package(spec.package) or not _is_in_home(target, Path.home())
        try:
            _remove_target(target, use_sudo=use_sudo, dry_run=False)
        except RuntimeError as e:
            ctx.console.warn(str(e))
            continue

        removed += 1
        del current[target]

    _save_link_specs_to_state(current)
    ctx.console.success(f"Removed {removed} symlink(s)")
def run_status(ctx: FlowContext, args):
    try:
        current = _load_link_specs_from_state()
    except RuntimeError as e:
        ctx.console.error(str(e))
        sys.exit(1)

    if not current:
        ctx.console.info("No linked dotfiles.")
        return

    grouped: Dict[str, List[LinkSpec]] = {}
    for spec in current.values():
        grouped.setdefault(spec.package, []).append(spec)

    for package in sorted(grouped.keys()):
        ctx.console.info(f"[{package}]")
        for spec in sorted(grouped[package], key=lambda s: str(s.target)):
            if spec.target.is_symlink():
                if _same_symlink(spec.target, spec.source):
                    print(f"  OK: {spec.target} -> {spec.source}")
                else:
                    print(f"  CHANGED: {spec.target}")
            elif spec.target.exists():
                print(f"  NOT SYMLINK: {spec.target}")
            else:
                print(f"  BROKEN: {spec.target} (missing)")
def run_sync(ctx: FlowContext, args):

@@ -448,15 +772,11 @@ def run_repo_push(ctx: FlowContext, args):
def run_relink(ctx: FlowContext, args):
    _ensure_flow_dir(ctx)

    ctx.console.info("Unlinking current symlinks...")
    run_unlink(ctx, args)

    # Defaults for attributes run_link expects but the relink parser doesn't define.
    args.copy = False
    args.force = False
    args.dry_run = False

@@ -465,29 +785,31 @@ def run_relink(ctx: FlowContext, args):
def run_clean(ctx: FlowContext, args):
    """Remove broken symlinks."""
    try:
        current = _load_link_specs_from_state()
    except RuntimeError as e:
        ctx.console.error(str(e))
        sys.exit(1)

    if not current:
        ctx.console.info("No linked dotfiles found.")
        return

    removed = 0
    for target in sorted(list(current.keys()), key=str):
        if not target.is_symlink() or target.exists():
            continue
        if args.dry_run:
            print(f"Would remove broken symlink: {target}")
        else:
            use_sudo = _is_root_package(current[target].package) or not _is_in_home(target, Path.home())
            _remove_target(target, use_sudo=use_sudo, dry_run=False)
            del current[target]
        removed += 1

    if not args.dry_run:
        _save_link_specs_to_state(current)
    if removed > 0:
        ctx.console.success(f"Cleaned {removed} broken symlink(s)")

@@ -496,7 +818,6 @@ def run_clean(ctx: FlowContext, args):
def run_edit(ctx: FlowContext, args):
    """Edit package config with auto-commit workflow."""
    _ensure_dotfiles_dir(ctx)

    target_name = args.target
@@ -505,24 +826,20 @@ def run_edit(ctx: FlowContext, args):
ctx.console.error(f"No matching package or path found for: {target_name}") ctx.console.error(f"No matching package or path found for: {target_name}")
sys.exit(1) sys.exit(1)
# Git pull before editing
ctx.console.info("Pulling latest changes...") ctx.console.info("Pulling latest changes...")
result = _run_dotfiles_git("pull", "--rebase", capture=True) result = _run_dotfiles_git("pull", "--rebase", capture=True)
if result.returncode != 0: if result.returncode != 0:
ctx.console.warn(f"Git pull failed: {result.stderr.strip()}") ctx.console.warn(f"Git pull failed: {result.stderr.strip()}")
# Open editor
editor = os.environ.get("EDITOR", "vim") editor = os.environ.get("EDITOR", "vim")
ctx.console.info(f"Opening {edit_target} in {editor}...") ctx.console.info(f"Opening {edit_target} in {editor}...")
edit_result = subprocess.run(shlex.split(editor) + [str(edit_target)]) edit_result = subprocess.run(shlex.split(editor) + [str(edit_target)])
if edit_result.returncode != 0: if edit_result.returncode != 0:
ctx.console.warn(f"Editor exited with status {edit_result.returncode}") ctx.console.warn(f"Editor exited with status {edit_result.returncode}")
# Check for changes
result = _run_dotfiles_git("status", "--porcelain", capture=True) result = _run_dotfiles_git("status", "--porcelain", capture=True)
if result.stdout.strip() and not args.no_commit: if result.stdout.strip() and not args.no_commit:
# Auto-commit changes
ctx.console.info("Changes detected, committing...") ctx.console.info("Changes detected, committing...")
subprocess.run(["git", "-C", str(DOTFILES_DIR), "add", "."], check=True) subprocess.run(["git", "-C", str(DOTFILES_DIR), "add", "."], check=True)
subprocess.run( subprocess.run(
@@ -530,12 +847,11 @@ def run_edit(ctx: FlowContext, args):
            check=True,
        )

        try:
            response = input("Push changes to remote? [Y/n] ")
        except (EOFError, KeyboardInterrupt):
            response = "n"
            print()  # newline after ^C / EOF

        if response.lower() != "n":
            subprocess.run(["git", "-C", str(DOTFILES_DIR), "push"], check=True)
            ctx.console.success("Changes committed and pushed")

View File

@@ -1,31 +1,27 @@
"""flow package — binary package management from manifest definitions.""" """flow package — package management from unified manifest definitions."""
import json
import sys
from typing import Any, Dict

from flow.commands.bootstrap import _get_package_catalog, _install_binary_package
from flow.core.config import FlowContext
from flow.core.paths import INSTALLED_STATE
def register(subparsers):
    p = subparsers.add_parser("package", aliases=["pkg"], help="Manage packages")
    sub = p.add_subparsers(dest="package_command")

    inst = sub.add_parser("install", help="Install packages from manifest")
    inst.add_argument("packages", nargs="+", help="Package names to install")
    inst.add_argument("--dry-run", action="store_true", help="Show what would be done")
    inst.set_defaults(handler=run_install)

    ls = sub.add_parser("list", help="List installed and available packages")
    ls.add_argument("--all", action="store_true", help="Show all available packages")
    ls.set_defaults(handler=run_list)

    rm = sub.add_parser("remove", help="Remove installed packages")
    rm.add_argument("packages", nargs="+", help="Package names to remove")
    rm.set_defaults(handler=run_remove)
@@ -35,53 +31,24 @@ def register(subparsers):
def _load_installed() -> dict:
    if INSTALLED_STATE.exists():
        with open(INSTALLED_STATE, "r", encoding="utf-8") as handle:
            return json.load(handle)
    return {}
def _save_installed(state: dict):
    INSTALLED_STATE.parent.mkdir(parents=True, exist_ok=True)
    with open(INSTALLED_STATE, "w", encoding="utf-8") as handle:
        json.dump(state, handle, indent=2)
def _get_definitions(ctx: FlowContext) -> Dict[str, Dict[str, Any]]:
    return _get_package_catalog(ctx)


def run_install(ctx: FlowContext, args):
    definitions = _get_definitions(ctx)
    installed = _load_installed()

    had_error = False

    for pkg_name in args.packages:
@@ -91,48 +58,33 @@ def run_install(ctx: FlowContext, args):
            had_error = True
            continue

        pkg_type = pkg_def.get("type", "pkg")
        if pkg_type != "binary":
            ctx.console.error(
                f"'flow package install' supports binary packages only. "
                f"'{pkg_name}' is type '{pkg_type}'."
            )
            had_error = True
            continue

        ctx.console.info(f"Installing {pkg_name}...")
        try:
            _install_binary_package(ctx, pkg_def, extra_env={}, dry_run=args.dry_run)
        except RuntimeError as e:
            ctx.console.error(str(e))
            had_error = True
            continue

        if not args.dry_run:
            installed[pkg_name] = {
                "version": str(pkg_def.get("version", "")),
                "type": pkg_type,
            }
        ctx.console.success(f"Installed {pkg_name}")

    if not args.dry_run:
        _save_installed(installed)

    if had_error:
        sys.exit(1)
@@ -141,26 +93,24 @@ def run_list(ctx: FlowContext, args):
    definitions = _get_definitions(ctx)
    installed = _load_installed()

    headers = ["PACKAGE", "TYPE", "INSTALLED", "AVAILABLE"]
    rows = []

    if args.all:
        if not definitions:
            ctx.console.info("No packages defined in manifest.")
            return
        for name, pkg_def in sorted(definitions.items()):
            inst_ver = installed.get(name, {}).get("version", "-")
            avail_ver = str(pkg_def.get("version", "")) or "-"
            rows.append([name, str(pkg_def.get("type", "pkg")), inst_ver, avail_ver])
    else:
        if not installed:
            ctx.console.info("No packages installed.")
            return
        for name, info in sorted(installed.items()):
            avail = str(definitions.get(name, {}).get("version", "")) or "-"
            rows.append([name, str(info.get("type", "?")), str(info.get("version", "?")), avail])

    ctx.console.table(headers, rows)
@@ -173,9 +123,10 @@ def run_remove(ctx: FlowContext, args):
ctx.console.warn(f"Package not installed: {pkg_name}") ctx.console.warn(f"Package not installed: {pkg_name}")
continue continue
# Remove from installed state
del installed[pkg_name] del installed[pkg_name]
ctx.console.success(f"Removed {pkg_name} from installed packages") ctx.console.success(f"Removed {pkg_name} from installed packages")
ctx.console.warn("Note: binary files were not automatically deleted. Remove manually if needed.") ctx.console.warn(
"Note: installed files were not automatically deleted. Remove manually if needed."
)
_save_installed(installed) _save_installed(installed)

View File

@@ -1,14 +1,13 @@
"""Configuration loading (merged YAML) and FlowContext."""
from dataclasses import dataclass, field
from pathlib import Path
from typing import Any, Dict, List, Optional

import yaml

from flow.core import paths
from flow.core.console import ConsoleLogger
from flow.core.platform import PlatformInfo
@@ -31,8 +30,17 @@ class AppConfig:
    targets: List[TargetConfig] = field(default_factory=list)


def _get_value(mapping: Any, *keys: str, default: Any = None) -> Any:
    if not isinstance(mapping, dict):
        return default
    for key in keys:
        if key in mapping:
            return mapping[key]
    return default


def _parse_target_config(key: str, value: str) -> Optional[TargetConfig]:
    """Parse a target line from config-like syntax.

    Supported formats:
    1) namespace = platform ssh_host [ssh_identity]
@@ -66,83 +74,218 @@ def _parse_target_config(key: str, value: str) -> Optional[TargetConfig]:
    )


def _list_yaml_files(directory: Path) -> List[Path]:
    if not directory.exists() or not directory.is_dir():
        return []
    files = []
    for child in directory.iterdir():
        if not child.is_file():
            continue
        if child.suffix.lower() in {".yaml", ".yml"}:
            files.append(child)
    return sorted(files, key=lambda p: p.name)


def _load_yaml_file(path: Path) -> Dict[str, Any]:
    try:
        with open(path, "r", encoding="utf-8") as handle:
            data = yaml.safe_load(handle)
    except yaml.YAMLError as e:
        raise RuntimeError(f"Invalid YAML in {path}: {e}") from e
    if data is None:
        return {}
    if not isinstance(data, dict):
        raise RuntimeError(f"YAML file must contain a mapping at root: {path}")
    return data


def _load_merged_yaml(directory: Path) -> Dict[str, Any]:
    merged: Dict[str, Any] = {}
    for file_path in _list_yaml_files(directory):
        merged.update(_load_yaml_file(file_path))
    return merged


def _resolve_default_yaml_root() -> Path:
    # Priority 1: self-hosted config from linked dotfiles
    if paths.DOTFILES_FLOW_CONFIG.exists() and _list_yaml_files(paths.DOTFILES_FLOW_CONFIG):
        return paths.DOTFILES_FLOW_CONFIG
    # Priority 2: local config directory
    return paths.CONFIG_DIR


def _load_yaml_source(path: Path) -> Dict[str, Any]:
    if not path.exists():
        return {}
    if path.is_file():
        return _load_yaml_file(path)
    if path.is_dir():
        return _load_merged_yaml(path)
    return {}


def _parse_targets(raw_targets: Any) -> List[TargetConfig]:
    targets: List[TargetConfig] = []
    if isinstance(raw_targets, dict):
        for key, value in raw_targets.items():
            if isinstance(value, str):
                parsed = _parse_target_config(key, value)
                if parsed is not None:
                    targets.append(parsed)
                continue
            if not isinstance(value, dict):
                continue
            namespace_from_key = key
            platform_from_key = None
            if "@" in key:
                namespace_from_key, platform_from_key = key.split("@", 1)
            namespace = str(_get_value(value, "namespace", default=namespace_from_key))
            platform = str(_get_value(value, "platform", default=platform_from_key))
            ssh_host = _get_value(value, "ssh_host", "ssh-host", "host", default="")
            ssh_identity = _get_value(value, "ssh_identity", "ssh-identity", "identity")
            if not namespace or not platform or not ssh_host:
                continue
            targets.append(
                TargetConfig(
                    namespace=namespace,
                    platform=platform,
                    ssh_host=str(ssh_host),
                    ssh_identity=str(ssh_identity) if ssh_identity else None,
                )
            )
    elif isinstance(raw_targets, list):
        for item in raw_targets:
            if not isinstance(item, dict):
                continue
            namespace = _get_value(item, "namespace")
            platform = _get_value(item, "platform")
            ssh_host = _get_value(item, "ssh_host", "ssh-host", "host")
            ssh_identity = _get_value(item, "ssh_identity", "ssh-identity", "identity")
            if not namespace or not platform or not ssh_host:
                continue
            targets.append(
                TargetConfig(
                    namespace=str(namespace),
                    platform=str(platform),
                    ssh_host=str(ssh_host),
                    ssh_identity=str(ssh_identity) if ssh_identity else None,
                )
            )
    return targets


def load_manifest(path: Optional[Path] = None) -> Dict[str, Any]:
    """Load merged YAML manifest/config data.

    Default priority:
    1) ~/.local/share/flow/dotfiles/_shared/flow/.config/flow/*.y[a]ml
    2) ~/.config/flow/*.y[a]ml
    """
    source = path if path is not None else _resolve_default_yaml_root()
    assert source is not None
    data = _load_yaml_source(source)
    return data if isinstance(data, dict) else {}


def load_config(path: Optional[Path] = None) -> AppConfig:
    """Load merged YAML config into AppConfig."""
    source = path if path is not None else _resolve_default_yaml_root()
    assert source is not None
    merged = _load_yaml_source(source)
    cfg = AppConfig()
    if not isinstance(merged, dict):
        return cfg

    repository = merged.get("repository") if isinstance(merged.get("repository"), dict) else {}
    paths_section = merged.get("paths") if isinstance(merged.get("paths"), dict) else {}
    defaults = merged.get("defaults") if isinstance(merged.get("defaults"), dict) else {}

    cfg.dotfiles_url = str(
        _get_value(repository, "dotfiles_url", "dotfiles-url",
                   default=merged.get("dotfiles_url", cfg.dotfiles_url))
    )
    cfg.dotfiles_branch = str(
        _get_value(repository, "dotfiles_branch", "dotfiles-branch",
                   default=merged.get("dotfiles_branch", cfg.dotfiles_branch))
    )
    cfg.projects_dir = str(
        _get_value(paths_section, "projects_dir", "projects-dir",
                   default=merged.get("projects_dir", cfg.projects_dir))
    )
    cfg.container_registry = str(
        _get_value(defaults, "container_registry", "container-registry",
                   default=merged.get("container_registry", cfg.container_registry))
    )
    cfg.container_tag = str(
        _get_value(defaults, "container_tag", "container-tag",
                   default=merged.get("container_tag", cfg.container_tag))
    )
    cfg.tmux_session = str(
        _get_value(defaults, "tmux_session", "tmux-session",
                   default=merged.get("tmux_session", cfg.tmux_session))
    )
    cfg.targets = _parse_targets(merged.get("targets", {}))
    return cfg


@dataclass
class FlowContext:
    config: AppConfig
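Note that `_load_merged_yaml` merges files with `dict.update`, so later files (in sorted name order) replace earlier ones key-by-key at the top level only; nested sections are not deep-merged. A standalone sketch with plain dicts standing in for two parsed YAML files (the `99-local.yaml` name is illustrative):

```python
# Shallow merge semantics of _load_merged_yaml, sketched with plain dicts:
# later files win wholesale per top-level key.
base = {
    "defaults": {"container-tag": "latest", "tmux-session": "main"},
    "targets": {"personal": "orb personal@orb"},
}
override = {"defaults": {"container-tag": "v1"}}  # e.g. a 99-local.yaml override

merged = {}
for data in (base, override):  # processed in sorted file-name order
    merged.update(data)

# The whole "defaults" mapping is replaced, so tmux-session disappears:
assert merged["defaults"] == {"container-tag": "v1"}
# Untouched top-level keys survive:
assert merged["targets"] == {"personal": "orb personal@orb"}
```

This is worth keeping in mind when splitting config across files: an override file must restate every key of a section it touches.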

View File

@@ -1,4 +1,4 @@
"""XDG-compliant path constants for flow."""
import os
from pathlib import Path
@@ -10,12 +10,12 @@ def _xdg(env_var: str, fallback: str) -> Path:
HOME = Path.home()

CONFIG_DIR = _xdg("XDG_CONFIG_HOME", str(HOME / ".config")) / "flow"
DATA_DIR = _xdg("XDG_DATA_HOME", str(HOME / ".local" / "share")) / "flow"
STATE_DIR = _xdg("XDG_STATE_HOME", str(HOME / ".local" / "state")) / "flow"

MANIFEST_FILE = CONFIG_DIR / "manifest.yaml"
CONFIG_FILE = CONFIG_DIR / "config.yaml"

DOTFILES_DIR = DATA_DIR / "dotfiles"
PACKAGES_DIR = DATA_DIR / "packages"
@@ -25,10 +25,10 @@ PROJECTS_DIR = HOME / "projects"
LINKED_STATE = STATE_DIR / "linked.json"
INSTALLED_STATE = STATE_DIR / "installed.json"

# Self-hosted flow config path (from dotfiles repo)
DOTFILES_FLOW_CONFIG = DOTFILES_DIR / "_shared" / "flow" / ".config" / "flow"
DOTFILES_MANIFEST = DOTFILES_FLOW_CONFIG / "manifest.yaml"
DOTFILES_CONFIG = DOTFILES_FLOW_CONFIG / "config.yaml"


def ensure_dirs() -> None:
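The `_xdg` helper's body falls outside this hunk; the sketch below assumes the conventional behavior (env var wins when set, otherwise the fallback), which is what the constants above rely on:

```python
import os
from pathlib import Path

# Assumed behavior of flow's _xdg helper (body not shown in this diff):
# use the XDG env var when set, else the fallback path.
def _xdg(env_var: str, fallback: str) -> Path:
    value = os.environ.get(env_var)
    return Path(value) if value else Path(fallback)

home = Path.home()

os.environ.pop("XDG_CONFIG_HOME", None)
default_dir = _xdg("XDG_CONFIG_HOME", str(home / ".config")) / "flow"

os.environ["XDG_CONFIG_HOME"] = "/tmp/xdg"
override_dir = _xdg("XDG_CONFIG_HOME", str(home / ".config")) / "flow"
```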

View File

@@ -7,8 +7,8 @@ from dataclasses import dataclass
@dataclass
class PlatformInfo:
    os: str = "linux"  # "linux" or "macos"
    arch: str = "x64"  # "x64" or "arm64"
    platform: str = ""  # "linux-x64", etc.

    def __post_init__(self):
        if not self.platform:
@@ -16,7 +16,7 @@ class PlatformInfo:
_OS_MAP = {"Darwin": "macos", "Linux": "linux"}
_ARCH_MAP = {"x86_64": "x64", "amd64": "x64", "aarch64": "arm64", "arm64": "arm64"}


def detect_platform() -> PlatformInfo:

View File

@@ -1,9 +1,9 @@
"""Variable substitution for shell-style and template expressions."""
import os
import re
from pathlib import Path
from typing import Any, Dict


def substitute(text: str, variables: Dict[str, str]) -> str:
@@ -26,13 +26,36 @@ def substitute(text: str, variables: Dict[str, str]) -> str:
    return pattern.sub(_replace, text)


def _resolve_template_value(expr: str, context: Dict[str, Any]) -> Any:
    if expr.startswith("env."):
        env_key = expr.split(".", 1)[1]
        env_ctx = context.get("env", {})
        if isinstance(env_ctx, dict) and env_key in env_ctx:
            return env_ctx[env_key]
        return os.environ.get(env_key)
    if expr in context:
        return context[expr]
    current: Any = context
    for part in expr.split("."):
        if not isinstance(current, dict) or part not in current:
            return None
        current = current[part]
    return current


def substitute_template(text: str, context: Dict[str, Any]) -> str:
    """Replace {{expr}} placeholders with values from context dict."""
    if not isinstance(text, str):
        return text

    def _replace(match: re.Match[str]) -> str:
        key = match.group(1).strip()
        value = _resolve_template_value(key, context)
        if value is None:
            return match.group(0)
        return str(value)

    return re.sub(r"\{\{\s*([^{}]+?)\s*\}\}", _replace, text)
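To illustrate the new resolver's behavior, here is a self-contained mini-version (a sketch mirroring the logic above, not the flow module itself): it supports plain `{{key}}`, dotted `{{a.b}}` lookups, and `{{env.VAR}}` with an `os.environ` fallback, and leaves unresolved placeholders untouched:

```python
import os
import re
from typing import Any, Dict

# Standalone sketch of the template resolver above.
def render(text: str, context: Dict[str, Any]) -> str:
    def resolve(expr: str) -> Any:
        if expr.startswith("env."):
            key = expr.split(".", 1)[1]
            env = context.get("env", {})
            if isinstance(env, dict) and key in env:
                return env[key]
            return os.environ.get(key)
        current: Any = context
        for part in expr.split("."):
            if not isinstance(current, dict) or part not in current:
                return None
            current = current[part]
        return current

    def replace(m: "re.Match[str]") -> str:
        value = resolve(m.group(1).strip())
        return m.group(0) if value is None else str(value)

    return re.sub(r"\{\{\s*([^{}]+?)\s*\}\}", replace, text)

ctx = {"os": "linux", "pkg": {"version": "0.10.4"}, "env": {"HOME_DIR": "/home/dev"}}
assert render("nvim-{{os}}-{{pkg.version}}", ctx) == "nvim-linux-0.10.4"
assert render("{{ env.HOME_DIR }}/bin", ctx) == "/home/dev/bin"
assert render("{{missing}}", ctx) == "{{missing}}"   # unresolved stays literal
```

Unlike the previous `\w+`-only pattern, the relaxed regex also tolerates whitespace inside the braces, which the asset-pattern templates rely on.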

View File

@@ -1,12 +1,16 @@
"""Tests for flow.commands.bootstrap helpers and schema behavior."""
import os

import pytest

from flow.commands.bootstrap import (
    _ensure_required_variables,
    _get_profiles,
    _normalize_profile_package_entry,
    _resolve_package_manager,
    _resolve_package_spec,
    _resolve_pkg_source_name,
)
from flow.core.config import AppConfig, FlowContext
from flow.core.console import ConsoleLogger
@@ -18,127 +22,28 @@ def ctx():
    return FlowContext(
        config=AppConfig(),
        manifest={
            "packages": [
                {
                    "name": "fd",
                    "type": "pkg",
                    "sources": {"apt": "fd-find", "dnf": "fd-find", "brew": "fd"},
                },
                {
                    "name": "neovim",
                    "type": "binary",
                    "source": "github:neovim/neovim",
                    "version": "0.10.4",
                    "asset-pattern": "nvim-{{os}}-{{arch}}.tar.gz",
                    "platform-map": {"linux-x64": {"os": "linux", "arch": "x64"}},
                    "install": {"bin": ["bin/nvim"]},
                },
            ]
        },
        platform=PlatformInfo(os="linux", arch="x64", platform="linux-x64"),
        console=ConsoleLogger(),
    )


def test_get_profiles_from_manifest(ctx):
    ctx.manifest = {"profiles": {"linux": {"os": "linux"}}}
    assert "linux" in _get_profiles(ctx)
@@ -151,38 +56,88 @@ def test_get_profiles_rejects_environments(ctx):
def test_resolve_package_manager_explicit_value(ctx):
    assert _resolve_package_manager(ctx, {"os": "linux", "package-manager": "dnf"}) == "dnf"


def test_resolve_package_manager_linux_auto_apt(monkeypatch, ctx):
    monkeypatch.setattr(
        "flow.commands.bootstrap.shutil.which",
        lambda name: "/usr/bin/apt" if name == "apt" else None,
    )
    assert _resolve_package_manager(ctx, {"os": "linux"}) == "apt"


def test_resolve_package_manager_linux_auto_dnf(monkeypatch, ctx):
    monkeypatch.setattr(
        "flow.commands.bootstrap.shutil.which",
        lambda name: "/usr/bin/dnf" if name == "dnf" else None,
    )
    assert _resolve_package_manager(ctx, {"os": "linux"}) == "dnf"


def test_resolve_package_manager_requires_os(ctx):
    with pytest.raises(RuntimeError, match="must be set"):
        _resolve_package_manager(ctx, {})


def test_normalize_package_entry_string():
    assert _normalize_profile_package_entry("git") == {"name": "git"}


def test_normalize_package_entry_type_prefix():
    assert _normalize_profile_package_entry("cask/wezterm") == {"name": "wezterm", "type": "cask"}


def test_normalize_package_entry_object():
    out = _normalize_profile_package_entry({"name": "docker", "allow_sudo": True})
    assert out["name"] == "docker"
    assert out["allow_sudo"] is True


def test_resolve_package_spec_uses_catalog_type(ctx):
    catalog = {
        "fd": {
            "name": "fd",
            "type": "pkg",
            "sources": {"apt": "fd-find"},
        }
    }
    resolved = _resolve_package_spec(catalog, {"name": "fd"})
    assert resolved["type"] == "pkg"
    assert resolved["sources"]["apt"] == "fd-find"


def test_resolve_package_spec_defaults_to_pkg(ctx):
    resolved = _resolve_package_spec({}, {"name": "git"})
    assert resolved["type"] == "pkg"


def test_resolve_package_spec_profile_override(ctx):
    catalog = {
        "neovim": {
            "name": "neovim",
            "type": "binary",
            "version": "0.10.4",
        }
    }
    resolved = _resolve_package_spec(catalog, {"name": "neovim", "post-install": "echo ok"})
    assert resolved["type"] == "binary"
    assert resolved["post-install"] == "echo ok"


def test_resolve_pkg_source_name_with_mapping(ctx):
    spec = {"name": "fd", "sources": {"apt": "fd-find", "dnf": "fd-find", "brew": "fd"}}
    assert _resolve_pkg_source_name(spec, "apt") == "fd-find"
    assert _resolve_pkg_source_name(spec, "dnf") == "fd-find"
    assert _resolve_pkg_source_name(spec, "brew") == "fd"


def test_resolve_pkg_source_name_fallback_to_name(ctx):
    spec = {"name": "ripgrep", "sources": {"apt": "ripgrep"}}
    assert _resolve_pkg_source_name(spec, "dnf") == "ripgrep"


def test_ensure_required_variables_missing_raises():
    with pytest.raises(RuntimeError, match="Missing required environment variables"):
        _ensure_required_variables(
            {"requires": ["USER_EMAIL", "TARGET_HOSTNAME"]}, {"USER_EMAIL": "a@b"}
        )


def test_ensure_required_variables_accepts_vars(monkeypatch):
    env = dict(os.environ)
    env["USER_EMAIL"] = "a@b"
    env["TARGET_HOSTNAME"] = "devbox"
    _ensure_required_variables({"requires": ["USER_EMAIL", "TARGET_HOSTNAME"]}, env)

View File

@@ -7,7 +7,9 @@ import sys
def _clean_env():
    """Return env dict without DF_* variables that trigger enter's guard."""
    env = {k: v for k, v in os.environ.items() if not k.startswith("DF_")}
    env["FLOW_SKIP_SUDO_REFRESH"] = "1"
    return env


def test_version():

View File

@@ -1,37 +1,34 @@
"""Tests for flow.core.config."""
import pytest

from flow.core.config import AppConfig, load_config, load_manifest


def test_load_config_missing_path(tmp_path):
    cfg = load_config(tmp_path / "nonexistent")
    assert isinstance(cfg, AppConfig)
    assert cfg.dotfiles_url == ""
    assert cfg.container_registry == "registry.tomastm.com"


def test_load_config_merged_yaml(tmp_path):
    (tmp_path / "10-config.yaml").write_text(
        "repository:\n"
        "  dotfiles-url: git@github.com:user/dots.git\n"
        "  dotfiles-branch: dev\n"
        "paths:\n"
        "  projects-dir: ~/code\n"
        "defaults:\n"
        "  container-registry: my.registry.com\n"
        "  container-tag: v1\n"
        "  tmux-session: main\n"
        "targets:\n"
        "  personal: orb personal@orb\n"
        "  work@ec2: work.ec2.internal ~/.ssh/id_work\n"
    )

    cfg = load_config(tmp_path)
    assert cfg.dotfiles_url == "git@github.com:user/dots.git"
    assert cfg.dotfiles_branch == "dev"
    assert cfg.projects_dir == "~/code"
@@ -40,31 +37,28 @@ work=ec2 work.ec2.internal ~/.ssh/id_work
    assert cfg.tmux_session == "main"
    assert len(cfg.targets) == 2
    assert cfg.targets[0].namespace == "personal"
    assert cfg.targets[0].platform == "orb"
    assert cfg.targets[0].ssh_host == "personal@orb"
    assert cfg.targets[1].ssh_identity == "~/.ssh/id_work"


def test_load_manifest_missing_path(tmp_path):
    result = load_manifest(tmp_path / "nonexistent")
    assert result == {}


def test_load_manifest_valid_directory(tmp_path):
    (tmp_path / "manifest.yaml").write_text(
        "profiles:\n"
        "  linux-vm:\n"
        "    os: linux\n"
        "    hostname: devbox\n"
    )
    result = load_manifest(tmp_path)
    assert result["profiles"]["linux-vm"]["os"] == "linux"


def test_load_manifest_non_dict_raises(tmp_path):
    bad = tmp_path / "bad.yaml"
    bad.write_text("- a\n- b\n")

    with pytest.raises(RuntimeError, match="must contain a mapping"):
        load_manifest(bad)

View File

@@ -1,80 +1,75 @@
"""Tests for flow.commands.dotfiles discovery and path resolution."""
import pytest

from flow.commands.dotfiles import _discover_packages, _resolve_edit_target, _walk_package


def _make_tree(tmp_path):
    flow_root = tmp_path

    shared = flow_root / "_shared"
    (shared / "zsh").mkdir(parents=True)
    (shared / "zsh" / ".zshrc").write_text("# zsh")
    (shared / "tmux").mkdir(parents=True)
    (shared / "tmux" / ".tmux.conf").write_text("# tmux")

    profile = flow_root / "work"
    (profile / "git").mkdir(parents=True)
    (profile / "git" / ".gitconfig").write_text("[user]\nname = Work")

    return tmp_path


def test_discover_packages_shared_only(tmp_path):
    tree = _make_tree(tmp_path)
    packages = _discover_packages(tree)
    assert "zsh" in packages
    assert "tmux" in packages
    assert "git" not in packages


def test_discover_packages_with_profile(tmp_path):
    tree = _make_tree(tmp_path)
    packages = _discover_packages(tree, profile="work")
    assert "zsh" in packages
    assert "tmux" in packages
    assert "git" in packages


def test_discover_packages_profile_overrides_shared(tmp_path):
    tree = _make_tree(tmp_path)
    profile_zsh = tree / "work" / "zsh"
    profile_zsh.mkdir(parents=True)
    (profile_zsh / ".zshrc").write_text("# work zsh")

    from flow.commands.dotfiles import _collect_home_specs

    with pytest.raises(RuntimeError, match="Conflicting dotfile targets"):
        _collect_home_specs(tree, tmp_path / "home", "work", set(), None)


def test_walk_package_returns_relative_paths(tmp_path):
    tree = _make_tree(tmp_path)
    source = tree / "_shared" / "zsh"

    pairs = list(_walk_package(source))
    assert len(pairs) == 1
    src, rel = pairs[0]
    assert src.name == ".zshrc"
    assert str(rel) == ".zshrc"


def test_resolve_edit_target_package(tmp_path):
    tree = _make_tree(tmp_path)
    target = _resolve_edit_target("zsh", dotfiles_dir=tree)
    assert target == tree / "_shared" / "zsh"


def test_resolve_edit_target_repo_path(tmp_path):
    tree = _make_tree(tmp_path)
    target = _resolve_edit_target("_shared/zsh/.zshrc", dotfiles_dir=tree)
    assert target == tree / "_shared" / "zsh" / ".zshrc"


def test_resolve_edit_target_missing_returns_none(tmp_path):
    tree = _make_tree(tmp_path)
    assert _resolve_edit_target("does-not-exist", dotfiles_dir=tree) is None

View File

@@ -1,298 +1,94 @@
"""Tests for flat-layout dotfiles helpers and state format."""
import json
from pathlib import Path

import pytest

from flow.commands.dotfiles import (
    LinkSpec,
    _collect_home_specs,
    _collect_root_specs,
    _list_profiles,
    _load_link_specs_from_state,
    _save_link_specs_to_state,
)


def _make_flow_tree(tmp_path: Path) -> Path:
    flow_root = tmp_path

    (flow_root / "_shared" / "git").mkdir(parents=True)
    (flow_root / "_shared" / "git" / ".gitconfig").write_text("shared")
    (flow_root / "_shared" / "tmux").mkdir(parents=True)
    (flow_root / "_shared" / "tmux" / ".tmux.conf").write_text("tmux")

    (flow_root / "work" / "git").mkdir(parents=True)
    (flow_root / "work" / "git" / ".gitconfig").write_text("profile")
    (flow_root / "work" / "nvim").mkdir(parents=True)
    (flow_root / "work" / "nvim" / ".config" / "nvim").mkdir(parents=True)
    (flow_root / "work" / "nvim" / ".config" / "nvim" / "init.lua").write_text("-- init")

    (flow_root / "_root" / "general" / "etc").mkdir(parents=True)
    (flow_root / "_root" / "general" / "etc" / "hostname").write_text("devbox")

    return flow_root


def test_list_profiles_ignores_reserved_dirs(tmp_path):
    flow_root = _make_flow_tree(tmp_path)
    profiles = _list_profiles(flow_root)
    assert profiles == ["work"]


def test_collect_home_specs_conflict_fails(tmp_path):
    flow_root = _make_flow_tree(tmp_path)

    home = tmp_path / "home"
    home.mkdir()

    with pytest.raises(RuntimeError, match="Conflicting dotfile targets"):
        _collect_home_specs(flow_root, home, "work", set(), None)


def test_collect_root_specs_maps_to_absolute_paths(tmp_path):
    flow_root = _make_flow_tree(tmp_path)
    specs = _collect_root_specs(flow_root, set(), include_root=True)
    assert Path("/etc/hostname") in specs
    assert specs[Path("/etc/hostname")].package == "_root/general"


def test_state_round_trip(tmp_path, monkeypatch):
"""Test that tree unfolds when second package needs same directory.""" state_file = tmp_path / "linked.json"
common = dotfiles_with_nested / "common" monkeypatch.setattr("flow.commands.dotfiles.LINKED_STATE", state_file)
# Create second package that shares .config specs = {
tmux = common / "tmux" / ".config" / "tmux" Path("/home/user/.gitconfig"): LinkSpec(
tmux.mkdir(parents=True) source=Path("/repo/_shared/git/.gitconfig"),
(tmux / "tmux.conf").write_text("# tmux") target=Path("/home/user/.gitconfig"),
package="_shared/git",
# First, link nvim (can fold .config/nvim) )
tree = LinkTree()
folder = TreeFolder(tree)
nvim_source = common / "nvim"
for src, dst in _walk_package(nvim_source, home_dir):
ops = folder.plan_link(src, dst, "nvim")
folder.execute_operations(ops, dry_run=False)
# Now link tmux (should unfold if needed)
tmux_source = common / "tmux"
for src, dst in _walk_package(tmux_source, home_dir):
ops = folder.plan_link(src, dst, "tmux")
folder.execute_operations(ops, dry_run=False)
# Both packages should be linked
assert (home_dir / ".config" / "nvim" / "init.lua").exists()
assert (home_dir / ".config" / "tmux" / "tmux.conf").exists()
def test_state_format_with_directory_links(dotfiles_with_nested, home_dir):
"""Test that state file correctly tracks directory vs file links."""
tree = LinkTree()
# Add a directory link
tree.add_link(
home_dir / ".config" / "nvim",
dotfiles_with_nested / "common" / "nvim" / ".config" / "nvim",
"nvim",
is_dir_link=True,
)
# Add a file link
tree.add_link(
home_dir / ".zshrc",
dotfiles_with_nested / "common" / "zsh" / ".zshrc",
"zsh",
is_dir_link=False,
)
# Convert to state
state = tree.to_state()
# Verify format
assert state["version"] == 2
nvim_link = state["links"]["nvim"][str(home_dir / ".config" / "nvim")]
assert nvim_link["is_directory_link"] is True
zsh_link = state["links"]["zsh"][str(home_dir / ".zshrc")]
assert zsh_link["is_directory_link"] is False
def test_state_backward_compatibility_rejected(home_dir):
"""Old state format should be rejected (no backward compatibility)."""
old_state = {
"links": {
"zsh": {
str(home_dir / ".zshrc"): str(home_dir.parent / "dotfiles" / "zsh" / ".zshrc"),
}
}
} }
_save_link_specs_to_state(specs)
loaded = _load_link_specs_from_state()
assert Path("/home/user/.gitconfig") in loaded
assert loaded[Path("/home/user/.gitconfig")].package == "_shared/git"
def test_state_old_format_rejected(tmp_path, monkeypatch):
state_file = tmp_path / "linked.json"
monkeypatch.setattr("flow.commands.dotfiles.LINKED_STATE", state_file)
state_file.write_text(
json.dumps(
{
"links": {
"zsh": {
"/home/user/.zshrc": "/repo/.zshrc",
}
}
}
)
)
with pytest.raises(RuntimeError, match="Unsupported linked state format"): with pytest.raises(RuntimeError, match="Unsupported linked state format"):
LinkTree.from_state(old_state) _load_link_specs_from_state()
def test_discover_packages_with_flow_package(tmp_path):
"""Test discovering the flow package itself from dotfiles."""
common = tmp_path / "common"
# Create flow package
flow_pkg = common / "flow" / ".config" / "flow"
flow_pkg.mkdir(parents=True)
(flow_pkg / "manifest.yaml").write_text("profiles: {}")
(flow_pkg / "config").write_text("[repository]\n")
packages = _discover_packages(tmp_path)
# Flow package should be discovered like any other
assert "flow" in packages
assert packages["flow"] == common / "flow"
def test_walk_flow_package(tmp_path):
"""Test walking the flow package structure."""
flow_pkg = tmp_path / "flow"
flow_config = flow_pkg / ".config" / "flow"
flow_config.mkdir(parents=True)
(flow_config / "manifest.yaml").write_text("profiles: {}")
(flow_config / "config").write_text("[repository]\n")
home = Path("/tmp/fakehome")
pairs = list(_walk_package(flow_pkg, home))
# Should find both files
assert len(pairs) == 2
targets = [str(t) for _, t in pairs]
assert str(home / ".config" / "flow" / "manifest.yaml") in targets
assert str(home / ".config" / "flow" / "config") in targets
def test_conflict_detection_before_execution(dotfiles_with_nested, home_dir):
"""Test that conflicts are detected before any changes are made."""
# Create existing file that conflicts
existing = home_dir / ".zshrc"
existing.parent.mkdir(parents=True, exist_ok=True)
existing.write_text("# existing zshrc")
# Try to link package that wants .zshrc
tree = LinkTree()
folder = TreeFolder(tree)
zsh_source = dotfiles_with_nested / "common" / "zsh"
operations = []
for src, dst in _walk_package(zsh_source, home_dir):
ops = folder.plan_link(src, dst, "zsh")
operations.extend(ops)
# Should detect conflict
conflicts = folder.detect_conflicts(operations)
assert len(conflicts) > 0
assert any("already exists" in c for c in conflicts)
# Original file should be unchanged
assert existing.read_text() == "# existing zshrc"
def test_profile_switching_relink(tmp_path):
"""Test switching between profiles maintains correct links."""
# Create profiles
common = tmp_path / "common"
profiles = tmp_path / "profiles"
# Common zsh
(common / "zsh").mkdir(parents=True)
(common / "zsh" / ".zshrc").write_text("# common zsh")
# Work profile override
(profiles / "work" / "zsh").mkdir(parents=True)
(profiles / "work" / "zsh" / ".zshrc").write_text("# work zsh")
# Personal profile override
(profiles / "personal" / "zsh").mkdir(parents=True)
(profiles / "personal" / "zsh" / ".zshrc").write_text("# personal zsh")
# Test that profile discovery works correctly
work_packages = _discover_packages(tmp_path, profile="work")
personal_packages = _discover_packages(tmp_path, profile="personal")
# Both should find zsh, but from different sources
assert "zsh" in work_packages
assert "zsh" in personal_packages
assert work_packages["zsh"] != personal_packages["zsh"]
def test_can_fold_empty_directory():
"""Test can_fold with empty directory."""
tree = LinkTree()
target_dir = Path("/home/user/.config/nvim")
# Empty directory - should be able to fold
assert tree.can_fold(target_dir, "nvim")
def test_can_fold_with_subdirectories():
"""Test can_fold with nested directory structure."""
tree = LinkTree()
base = Path("/home/user/.config/nvim")
# Add nested files from same package
tree.add_link(base / "init.lua", Path("/dotfiles/nvim/init.lua"), "nvim")
tree.add_link(base / "lua" / "config.lua", Path("/dotfiles/nvim/lua/config.lua"), "nvim")
tree.add_link(base / "lua" / "plugins" / "init.lua", Path("/dotfiles/nvim/lua/plugins/init.lua"), "nvim")
# Should be able to fold at base level
assert tree.can_fold(base, "nvim")
# Add file from different package
tree.add_link(base / "other.lua", Path("/dotfiles/other/other.lua"), "other")
# Now cannot fold
assert not tree.can_fold(base, "nvim")
def test_execute_operations_creates_parent_dirs(tmp_path):
"""Test that execute_operations creates necessary parent directories."""
tree = LinkTree()
folder = TreeFolder(tree)
source = tmp_path / "dotfiles" / "nvim" / ".config" / "nvim" / "init.lua"
target = tmp_path / "home" / ".config" / "nvim" / "init.lua"
# Create source
source.parent.mkdir(parents=True)
source.write_text("-- init")
# Target parent doesn't exist yet
assert not target.parent.exists()
# Plan and execute
ops = folder.plan_link(source, target, "nvim")
folder.execute_operations(ops, dry_run=False)
# Parent should be created
assert target.parent.exists()
assert target.is_symlink()
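The versioned state format these tests pin down can be sketched roughly as follows. This is an illustrative reimplementation, not the actual `flow.commands.dotfiles` code: `save_specs`, `load_specs`, the on-disk field names, and the `"version": 2` marker are assumptions inferred from the assertions above.

```python
import json
from dataclasses import dataclass
from pathlib import Path


@dataclass
class LinkSpec:
    source: Path
    target: Path
    package: str


def save_specs(state_file: Path, specs: dict) -> None:
    # Persist a versioned target -> {source, package} mapping.
    payload = {
        "version": 2,
        "links": {
            str(target): {"source": str(spec.source), "package": spec.package}
            for target, spec in specs.items()
        },
    }
    state_file.write_text(json.dumps(payload, indent=2))


def load_specs(state_file: Path) -> dict:
    raw = json.loads(state_file.read_text())
    # Reject anything without the expected version marker, including the
    # old flat {"links": {pkg: {target: source}}} layout.
    if raw.get("version") != 2:
        raise RuntimeError("Unsupported linked state format")
    return {
        Path(target): LinkSpec(
            source=Path(entry["source"]),
            target=Path(target),
            package=entry["package"],
        )
        for target, entry in raw.get("links", {}).items()
    }
```

Keeping an explicit version field lets the loader fail loudly on the legacy format instead of mis-parsing it.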

View File

@@ -18,15 +18,15 @@ from flow.core.paths import (
def test_config_dir_under_home():
    assert ".config/flow" in str(CONFIG_DIR)


def test_data_dir_under_home():
    assert ".local/share/flow" in str(DATA_DIR)


def test_state_dir_under_home():
    assert ".local/state/flow" in str(STATE_DIR)


def test_manifest_file_in_config_dir():
@@ -34,7 +34,7 @@ def test_manifest_file_in_config_dir():

def test_config_file_in_config_dir():
    assert CONFIG_FILE == CONFIG_DIR / "config.yaml"


def test_dotfiles_dir():
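The directory expectations above follow the XDG base-directory convention. A minimal sketch of how such constants could be derived; the `xdg_dir` helper and the `flow` app-directory name are illustrative assumptions, not the actual `flow.core.paths` implementation:

```python
import os
from pathlib import Path


def xdg_dir(env_var: str, default: str, app: str = "flow") -> Path:
    # Use the XDG override when the env var is set, else fall back under $HOME.
    base = os.environ.get(env_var)
    return (Path(base) if base else Path.home() / default) / app


CONFIG_DIR = xdg_dir("XDG_CONFIG_HOME", ".config")
DATA_DIR = xdg_dir("XDG_DATA_HOME", ".local/share")
STATE_DIR = xdg_dir("XDG_STATE_HOME", ".local/state")
CONFIG_FILE = CONFIG_DIR / "config.yaml"
```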

View File

@@ -11,7 +11,7 @@ def test_detect_platform_returns_platforminfo():
    info = detect_platform()
    assert isinstance(info, PlatformInfo)
    assert info.os in ("linux", "macos")
    assert info.arch in ("x64", "arm64")
    assert info.platform == f"{info.os}-{info.arch}"
@@ -27,4 +27,3 @@ def test_detect_platform_unsupported_arch(monkeypatch):
        detect_platform()

View File

@@ -1,215 +1,81 @@
"""Tests for self-hosting flow config from dotfiles repository.""" """Tests for self-hosted merged YAML config loading."""
from pathlib import Path from pathlib import Path
from unittest.mock import patch
import pytest import pytest
import yaml
from flow.core import paths as paths_module from flow.core import paths as paths_module
from flow.core.config import load_config, load_manifest from flow.core.config import load_config, load_manifest
@pytest.fixture @pytest.fixture
def mock_paths(tmp_path, monkeypatch): def mock_roots(tmp_path, monkeypatch):
"""Mock path constants for testing.""" local_root = tmp_path / "local-flow"
config_dir = tmp_path / "config" dotfiles_root = tmp_path / "dotfiles" / "_shared" / "flow" / ".config" / "flow"
dotfiles_dir = tmp_path / "dotfiles"
config_dir.mkdir() local_root.mkdir(parents=True)
dotfiles_dir.mkdir() dotfiles_root.mkdir(parents=True)
test_paths = { monkeypatch.setattr(paths_module, "CONFIG_DIR", local_root)
"config_dir": config_dir, monkeypatch.setattr(paths_module, "DOTFILES_FLOW_CONFIG", dotfiles_root)
"dotfiles_dir": dotfiles_dir,
"local_config": config_dir / "config", return {
"local_manifest": config_dir / "manifest.yaml", "local": local_root,
"dotfiles_config": dotfiles_dir / "flow" / ".config" / "flow" / "config", "dotfiles": dotfiles_root,
"dotfiles_manifest": dotfiles_dir / "flow" / ".config" / "flow" / "manifest.yaml",
} }
# Patch at the paths module level
monkeypatch.setattr(paths_module, "CONFIG_FILE", test_paths["local_config"])
monkeypatch.setattr(paths_module, "MANIFEST_FILE", test_paths["local_manifest"])
monkeypatch.setattr(paths_module, "DOTFILES_CONFIG", test_paths["dotfiles_config"])
monkeypatch.setattr(paths_module, "DOTFILES_MANIFEST", test_paths["dotfiles_manifest"])
return test_paths def test_load_manifest_priority_dotfiles_first(mock_roots):
(mock_roots["local"] / "profiles.yaml").write_text("profiles:\n local: {os: linux}\n")
(mock_roots["dotfiles"] / "profiles.yaml").write_text("profiles:\n dotfiles: {os: macos}\n")
def test_load_manifest_priority_dotfiles_first(mock_paths):
"""Test that dotfiles manifest takes priority over local."""
# Create both manifests
local_manifest = mock_paths["local_manifest"]
dotfiles_manifest = mock_paths["dotfiles_manifest"]
local_manifest.write_text("profiles:\n local:\n os: linux")
dotfiles_manifest.parent.mkdir(parents=True)
dotfiles_manifest.write_text("profiles:\n dotfiles:\n os: macos")
# Should load from dotfiles
manifest = load_manifest() manifest = load_manifest()
assert "dotfiles" in manifest.get("profiles", {}) assert "dotfiles" in manifest.get("profiles", {})
assert "local" not in manifest.get("profiles", {}) assert "local" not in manifest.get("profiles", {})
def test_load_manifest_fallback_to_local(mock_paths): def test_load_manifest_fallback_to_local(mock_roots):
"""Test fallback to local manifest when dotfiles doesn't exist.""" (mock_roots["local"] / "profiles.yaml").write_text("profiles:\n local: {os: linux}\n")
local_manifest = mock_paths["local_manifest"]
local_manifest.write_text("profiles:\n local:\n os: linux") # Remove dotfiles yaml file so local takes over.
dot_yaml = mock_roots["dotfiles"] / "profiles.yaml"
if dot_yaml.exists():
dot_yaml.unlink()
# Dotfiles manifest doesn't exist
manifest = load_manifest() manifest = load_manifest()
assert "local" in manifest.get("profiles", {}) assert "local" in manifest.get("profiles", {})
def test_load_manifest_empty_when_none_exist(mock_paths): def test_load_manifest_empty_when_none_exist(mock_roots):
"""Test empty dict returned when no manifests exist."""
manifest = load_manifest() manifest = load_manifest()
assert manifest == {} assert manifest == {}
def test_load_config_priority_dotfiles_first(mock_paths): def test_load_config_from_merged_yaml(mock_roots):
"""Test that dotfiles config takes priority over local.""" (mock_roots["dotfiles"] / "config.yaml").write_text(
local_config = mock_paths["local_config"] "repository:\n"
dotfiles_config = mock_paths["dotfiles_config"] " dotfiles-url: git@github.com:user/dotfiles.git\n"
"defaults:\n"
# Create local config " container-registry: registry.example.com\n"
local_config.write_text(
"[repository]\n"
"dotfiles_url = https://github.com/user/dotfiles-local.git\n"
) )
# Create dotfiles config cfg = load_config()
dotfiles_config.parent.mkdir(parents=True) assert cfg.dotfiles_url == "git@github.com:user/dotfiles.git"
dotfiles_config.write_text( assert cfg.container_registry == "registry.example.com"
"[repository]\n"
"dotfiles_url = https://github.com/user/dotfiles-from-repo.git\n"
)
# Should load from dotfiles
config = load_config()
assert "dotfiles-from-repo" in config.dotfiles_url
def test_load_config_fallback_to_local(mock_paths): def test_yaml_merge_is_alphabetical_last_writer_wins(mock_roots):
"""Test fallback to local config when dotfiles doesn't exist.""" (mock_roots["local"] / "10-a.yaml").write_text("profiles:\n a: {os: linux}\n")
local_config = mock_paths["local_config"] (mock_roots["local"] / "20-b.yaml").write_text("profiles:\n b: {os: linux}\n")
local_config.write_text(
"[repository]\n"
"dotfiles_url = https://github.com/user/dotfiles-local.git\n"
)
# Dotfiles config doesn't exist manifest = load_manifest(mock_roots["local"])
config = load_config() assert "b" in manifest.get("profiles", {})
assert "dotfiles-local" in config.dotfiles_url assert "a" not in manifest.get("profiles", {})
def test_load_config_empty_when_none_exist(mock_paths): def test_explicit_file_path_loads_single_yaml(tmp_path):
"""Test default config returned when no configs exist.""" one_file = tmp_path / "single.yaml"
config = load_config() one_file.write_text("profiles:\n only: {os: linux}\n")
assert config.dotfiles_url == ""
assert config.dotfiles_branch == "main"
manifest = load_manifest(one_file)
def test_self_hosting_workflow(tmp_path, monkeypatch): assert "only" in manifest["profiles"]
"""Test complete self-hosting workflow.
Simulates:
1. User has dotfiles repo with flow config
2. Flow links its own config from dotfiles
3. Flow reads from self-hosted location
"""
# Setup paths
home = tmp_path / "home"
dotfiles = tmp_path / "dotfiles"
home.mkdir()
dotfiles.mkdir()
# Create flow package in dotfiles
flow_pkg = dotfiles / "flow" / ".config" / "flow"
flow_pkg.mkdir(parents=True)
# Create manifest in dotfiles
manifest_content = {
"profiles": {
"test-env": {
"os": "linux",
"packages": {"standard": ["git", "vim"]},
}
}
}
(flow_pkg / "manifest.yaml").write_text(yaml.dump(manifest_content))
# Create config in dotfiles
(flow_pkg / "config").write_text(
"[repository]\n"
"dotfiles_url = https://github.com/user/dotfiles.git\n"
)
# Mock paths to use our temp directories
monkeypatch.setattr(paths_module, "DOTFILES_MANIFEST", flow_pkg / "manifest.yaml")
monkeypatch.setattr(paths_module, "DOTFILES_CONFIG", flow_pkg / "config")
monkeypatch.setattr(paths_module, "MANIFEST_FILE", home / ".config" / "devflow" / "manifest.yaml")
monkeypatch.setattr(paths_module, "CONFIG_FILE", home / ".config" / "devflow" / "config")
# Load config and manifest - should come from dotfiles
manifest = load_manifest()
config = load_config()
assert "test-env" in manifest.get("profiles", {})
assert "github.com/user/dotfiles.git" in config.dotfiles_url
def test_manifest_cascade_with_symlink(tmp_path, monkeypatch):
"""Test that loading works correctly when symlink is used."""
# Setup
dotfiles = tmp_path / "dotfiles"
home_config = tmp_path / "home" / ".config" / "flow"
flow_pkg = dotfiles / "flow" / ".config" / "flow"
flow_pkg.mkdir(parents=True)
home_config.mkdir(parents=True)
# Create manifest in dotfiles
manifest_content = {"profiles": {"from-dotfiles": {"os": "linux"}}}
(flow_pkg / "manifest.yaml").write_text(yaml.dump(manifest_content))
# Create symlink from home config to dotfiles
manifest_link = home_config / "manifest.yaml"
manifest_link.symlink_to(flow_pkg / "manifest.yaml")
# Mock paths
monkeypatch.setattr(paths_module, "DOTFILES_MANIFEST", flow_pkg / "manifest.yaml")
monkeypatch.setattr(paths_module, "MANIFEST_FILE", manifest_link)
# Load - should work through symlink
manifest = load_manifest()
assert "from-dotfiles" in manifest.get("profiles", {})
def test_config_priority_documentation(mock_paths):
"""Document the config loading priority for users."""
# This test serves as documentation of the cascade behavior
# Priority 1: Dotfiles repo (self-hosted)
dotfiles_manifest = mock_paths["dotfiles_manifest"]
dotfiles_manifest.parent.mkdir(parents=True)
dotfiles_manifest.write_text("profiles:\n priority-1: {}")
manifest = load_manifest()
assert "priority-1" in manifest.get("profiles", {})
# If we remove dotfiles, falls back to Priority 2: Local override
dotfiles_manifest.unlink()
local_manifest = mock_paths["local_manifest"]
local_manifest.write_text("profiles:\n priority-2: {}")
manifest = load_manifest()
assert "priority-2" in manifest.get("profiles", {})
# If neither exists, Priority 3: Empty fallback
local_manifest.unlink()
manifest = load_manifest()
assert manifest == {}
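The merge semantics these tests pin down — files read in alphabetical order, with a later file's top-level key replacing the earlier one wholesale rather than deep-merging — can be sketched like this. The helper names are illustrative, and `parse` is injected so the sketch stays YAML-library-agnostic; the real `load_manifest` presumably parses YAML directly:

```python
from pathlib import Path
from typing import Callable


def merge_documents(docs: list) -> dict:
    # Shallow merge: a later document's top-level key replaces the earlier
    # one wholesale, so 20-b.yaml's "profiles" wins over 10-a.yaml's.
    merged: dict = {}
    for doc in docs:
        merged.update(doc or {})
    return merged


def load_merged(root: Path, parse: Callable[[str], dict]) -> dict:
    if root.is_file():
        # An explicit file path loads that single document only.
        return parse(root.read_text()) or {}
    files = sorted(root.glob("*.yaml"))  # alphabetical order defines priority
    return merge_documents([parse(f.read_text()) for f in files])
```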

View File

@@ -50,3 +50,8 @@ def test_substitute_template_non_string():
def test_substitute_template_no_placeholders():
    result = substitute_template("plain text", {"os": "linux"})
    assert result == "plain text"


def test_substitute_template_env_namespace():
    result = substitute_template("{{ env.USER_EMAIL }}", {"env": {"USER_EMAIL": "you@example.com"}})
    assert result == "you@example.com"
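A minimal sketch of a substitution function consistent with these tests: `{{ name }}` placeholders with dotted lookups into nested dicts (the `env.` namespace), non-strings passed through untouched. The regex and dotted-path handling are assumptions about the real `substitute_template`, not its actual implementation:

```python
import re

_PLACEHOLDER = re.compile(r"\{\{\s*([\w.]+)\s*\}\}")


def substitute_template(value, variables):
    # Only strings are templated; other types pass through untouched.
    if not isinstance(value, str):
        return value

    def lookup(match: re.Match) -> str:
        node = variables
        for part in match.group(1).split("."):
            node = node[part]  # dotted names walk nested dicts, e.g. env.USER_EMAIL
        return str(node)

    return _PLACEHOLDER.sub(lookup, value)
```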