This guide provides detailed SLAC S3DF-specific configuration information for customizing and understanding the Spinal Tap Kubernetes deployment.
For deployment instructions, see README.md.
Contents:

- Storage Configuration
- Authentication and Access Control
- Ingress Configuration
- Namespace Configuration
- Resource Limits
- SLAC Best Practices
- Support Contacts
## Storage Configuration

S3DF uses facility-specific storage classes to control access to filesystem paths. Common storage classes:

- `sdf-data-neutrino` - For neutrino physics data
- `sdf-data-lcls` - For LCLS data
- `sdf-data-atlas` - For ATLAS data
- `sdf-data-rubin` - For Rubin Observatory data
If you need access to a filesystem path not covered by existing storage classes:

Email: s3df-help@slac.stanford.edu

Include:

- Filesystem path (e.g., `/sdf/data/neutrino/myproject`)
- Access justification
- Estimated data size
- vCluster name (e.g., `neutrino-ml`)
To use a different facility, update `pvc.yaml`:

```yaml
spec:
  storageClassName: sdf-data-YOUR_FACILITY  # e.g., sdf-data-lcls, sdf-data-atlas
```

Then update the `subPath` in the `volumeMounts` section of `deployment.yaml` to match your facility's directory structure.
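For reference, the matching `volumeMounts` entry in `deployment.yaml` might look like the sketch below; the volume name `data` is an assumption here, so check it against the `volumes` section of your own manifest:

```yaml
volumeMounts:
  - name: data            # assumed volume name; must match an entry in spec.volumes
    mountPath: /data      # where the data appears inside the container
    subPath: spinal-tap   # change to match your facility's directory structure
    readOnly: true
```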
The `pvc.yaml` file defines the storage request. For the neutrino facility, it is pre-configured for read-only access:

```yaml
spec:
  accessModes:
    - ReadOnlyMany                 # Read-only access
  storageClassName: sdf-data-neutrino
  resources:
    requests:
      storage: 1Gi                 # Minimal size for mounting existing data
```

The storage is mounted at `/data` inside the container and exposes only the `/sdf/data/neutrino/spinal-tap/` subdirectory of the SDF filesystem (via `subPath` in the deployment).
How it works:

- The `sdf-data-neutrino` storage class is pre-configured by S3DF to map to `/sdf/data/neutrino` on the filesystem
- The `subPath: spinal-tap` limits the mount to the `spinal-tap` subdirectory
- The `mountPath: /data` makes it available at `/data` inside the container
Exposing Data via Symlinks:

On the SDF filesystem, create a directory structure like:

```
/sdf/data/neutrino/spinal-tap/
├── run123 -> /sdf/data/neutrino/reconstruction/run123
├── run124 -> /sdf/data/neutrino/reconstruction/run124
└── analysis-2024 -> /sdf/data/neutrino/analysis/2024
```

Inside the container, users will see:

```
/data/run123/
/data/run124/
/data/analysis-2024/
```
This allows you to selectively expose only the data you want accessible through Spinal Tap.
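The layout above can be built with plain `mkdir`/`ln` commands. The sketch below uses a scratch directory in place of `/sdf/data/neutrino`, since the real paths require SDF filesystem access:

```shell
set -e

ROOT=$(mktemp -d)                      # stand-in for /sdf/data/neutrino
mkdir -p "$ROOT/reconstruction/run123" \
         "$ROOT/reconstruction/run124" \
         "$ROOT/analysis/2024" \
         "$ROOT/spinal-tap"

# Expose only the selected datasets through the spinal-tap directory.
ln -s "$ROOT/reconstruction/run123" "$ROOT/spinal-tap/run123"
ln -s "$ROOT/reconstruction/run124" "$ROOT/spinal-tap/run124"
ln -s "$ROOT/analysis/2024"         "$ROOT/spinal-tap/analysis-2024"

ls "$ROOT/spinal-tap"                  # lists: analysis-2024 run123 run124
```

Symlinks added or removed later take effect immediately inside the container, since the mount follows the live filesystem.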
## Authentication and Access Control

Spinal Tap includes optional authentication to control access to experiment-specific data. When deployed to Kubernetes, authentication is enabled by default.
For complete authentication setup, see AUTHENTICATION.md.
Users are restricted to their experiment's data directory:

- 2x2: `/data/2x2/`
- NDLAR: `/data/ndlar/`
- ICARUS: `/data/icarus/`
- SBND: `/data/sbnd/`
### Shared Folders

You can designate folders that all authenticated users can access, regardless of experiment. This is useful for:
- Calibration data
- Common analysis tools
- Documentation
- Tutorial/example files
Configure shared folders in `deployment.yaml`:

```yaml
env:
  - name: SPINAL_TAP_SHARED_FOLDERS
    value: "/data/generic,/data/common,/data/calibration"
```

Default: `/data/generic`
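The value is a comma-separated list of paths. Purely as an illustration of the format (the application's actual parsing may differ), splitting it looks like:

```shell
# Example value in the SPINAL_TAP_SHARED_FOLDERS format.
SPINAL_TAP_SHARED_FOLDERS="/data/generic,/data/common,/data/calibration"

# Split on commas into one folder per line.
SHARED=$(echo "$SPINAL_TAP_SHARED_FOLDERS" | tr ',' '\n')
echo "$SHARED"
```

Avoid spaces around the commas, so each entry is an exact path.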
For multi-experiment access with shared resources:

```
/sdf/data/neutrino/
├── 2x2/
│   └── spine/prod/        # 2x2-only files
├── ndlar/
│   └── spine/prod/        # NDLAR-only files
├── icarus/
│   └── spine/prod/        # ICARUS-only files
├── sbnd/
│   └── spine/prod/        # SBND-only files
├── generic/
│   └── prod/              # Generic data files
└── public_html/
    └── spine_workshop/    # Public, small-scale files for all experiments
```

## Ingress Configuration

The application is accessible at:

https://spinal-tap.slac.stanford.edu
To use a different subdomain, update `k8s/ingress.yaml`:

```yaml
spec:
  rules:
    - host: my-app.slac.stanford.edu  # ← Change this
```

Then coordinate with S3DF admins for DNS setup.

HTTPS is automatically configured by the S3DF ingress controller. No additional cert-manager configuration is needed.
## Namespace Configuration

By default, resources are deployed in the `spinal-tap` namespace (defined in `kustomization.yaml`). To use a different namespace:

- Edit `k8s/kustomization.yaml`:

  ```yaml
  namespace: my-namespace  # ← Change this
  ```

- Create the namespace:

  ```
  kubectl create namespace my-namespace
  ```

- Ensure your vCluster has access to that namespace (contact S3DF support if needed).
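The edit itself can be scripted; the sketch below demonstrates it on a scratch copy rather than your real `k8s/kustomization.yaml`, and the file contents are a minimal stand-in:

```shell
# Scratch copy standing in for k8s/kustomization.yaml.
WORKDIR=$(mktemp -d)
cat > "$WORKDIR/kustomization.yaml" <<'EOF'
namespace: spinal-tap
resources:
  - deployment.yaml
EOF

# Rewrite the namespace field in place (GNU sed syntax).
sed -i 's/^namespace: .*/namespace: my-namespace/' "$WORKDIR/kustomization.yaml"

grep '^namespace:' "$WORKDIR/kustomization.yaml"   # namespace: my-namespace
```

If you have the kustomize CLI, `kustomize edit set namespace my-namespace` (run in the directory containing `kustomization.yaml`) makes the same change.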
## Resource Limits

Current defaults in `deployment.yaml`:

```yaml
resources:
  requests:
    memory: "512Mi"
    cpu: "250m"
  limits:
    memory: "2Gi"
    cpu: "1000m"
```

Increase memory if:

- Pods are OOMKilled (`kubectl describe pod` shows exit code 137)
- Loading large datasets
- High concurrent user load
Increase CPU if:

- Slow response times
- High CPU usage (`kubectl top pod` shows CPU near limit)
Check current resource usage:

```
kubectl top pod -n spinal-tap
```

Check for resource-related issues:

```
kubectl describe pod -n spinal-tap -l app=spinal-tap | grep -E "State|Reason|Exit Code|Memory|CPU"
```

Adjust based on your vCluster quotas and actual usage patterns.
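One way to apply such adjustments without hand-editing `deployment.yaml` is a Kustomize strategic-merge patch. The sketch below assumes the Deployment and container are both named `spinal-tap` (verify against your manifests) and uses example values:

```yaml
# k8s/resources-patch.yaml (hypothetical file name)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spinal-tap          # must match the Deployment name in deployment.yaml
spec:
  template:
    spec:
      containers:
        - name: spinal-tap  # must match the container name
          resources:
            limits:
              memory: "4Gi"
              cpu: "2000m"
```

Reference the file from `k8s/kustomization.yaml` under `patches:` (or `patchesStrategicMerge:` on older Kustomize versions) and redeploy.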
## SLAC Best Practices

This deployment follows patterns from slaclab/slac-k8s-examples:

- Use Kustomize: Manage all resources via `kustomization.yaml` for consistent deployments
- Use Makefile: Standard interface (`make apply`, `make dump`, `make delete`)
- Separate YAML files: One Kubernetes resource per file for clarity
- StorageClasses: Use facility-specific storage classes (e.g., `sdf-data-neutrino`)
- Labels: Consistent labeling with `app: spinal-tap` for easy resource selection
- ReadOnly mounts: Use `ReadOnlyMany` for shared data access across replicas
- Symlinks: Expose only needed data via symlinks rather than mounting entire filesystems
## Support Contacts

- S3DF Infrastructure & Kubernetes: s3df-help@slac.stanford.edu
- vCluster Access & Permissions: s3df-help@slac.stanford.edu
- Storage Class Approvals: s3df-help@slac.stanford.edu
- Application Issues: GitHub Issues