Commit 15e896e

Merge branch 'source' of github.com:UTAustin-SwarmLab/UTAustin-SwarmLab.github.io into source

2 parents: 0fa33b5 + 685ead7

15 files changed: +208 −1 lines

_bibliography/references.bib

Lines changed: 22 additions & 0 deletions
@@ -2,6 +2,28 @@
 ---
 references
 ==========
+
+@article{shah2025neus,
+  title={NeuS-QA: Grounding Long-Form Video Understanding in Temporal Logic and Neuro-Symbolic Reasoning},
+  author={Shah, Sahil and Sharan, SP and Goel, Harsh and Choi, Minkyu and Munir, Mustafa and Pasula, Manvik and Marculescu, Radu and Chinchali, Sandeep},
+  journal={Proceedings of the AAAI Conference on Artificial Intelligence},
+  year={2026}
+}
+
+@article{shah2025challenge,
+  title={A Challenge to Build Neuro-Symbolic Video Agents},
+  author={Shah, Sahil and Goel, Harsh and Narasimhan, Sai Shankar and Choi, Minkyu and Sharan, SP and Akcin, Oguzhan and Chinchali, Sandeep},
+  journal={International Conference on Neuro-symbolic Systems},
+  year={2025}
+}
+
+@article{sharan2025neuro,
+  title={Neuro-symbolic evaluation of text-to-video models using formal verification},
+  author={Sharan, SP and Choi, Minkyu and Shah, Sahil and Goel, Harsh and Omama, Mohammad and Chinchali, Sandeep},
+  journal={Proceedings of the Computer Vision and Pattern Recognition Conference},
+  year={2025}
+}
+
 @article{baser2025fairsynergy,
   title={FairSynergy: Fair Resource Allocation for Fleet Intelligence},
   author={Baser, Oguzhan and Kale, Kaan and Li, Po-han and Chinchali, Sandeep},

_data/news.yml

Lines changed: 7 additions & 1 deletion
@@ -1,3 +1,9 @@
+- date: 2025-11-07
+  details: >-
+    Sahil Shah's paper <a href="https://arxiv.org/abs/2509.18041"> NeuS-QA: Grounding Long-Form Video Understanding in Temporal Logic and Neuro-Symbolic Reasoning </a> was accepted to AAAI 2026!
+- date: 2025-10-10
+  details: >-
+    Sai Shankar's paper <a href="https://arxiv.org/abs/2410.12652"> Constrained Posterior Sampling: Time Series Generation with Hard Constraints </a> was accepted to NeurIPS 2025!
 - date: 2025-09-21
   details: >-
     Po-han's paper <a href="https://openreview.net/forum?id=C35FCYZBXp"> VIBE: Annotation-Free Video-to-Text Information Bottleneck Evaluation for TL;DR </a> was accepted to NeurIPS 2025!
@@ -6,4 +12,4 @@
     Oguzhan B.'s paper <a href="https://openreview.net/forum?id=M1e2PEMLp2"> Fair Resource Allocation for Fleet Intelligence </a> was accepted to GLOBECOM 2025!
 - date: 2025-08-01
   details: >-
-    Oguzhan A.'s paper <a href="https://openreview.net/forum?id=M1e2PEMLp2"> Distributed Upload and Active Labeling for Resource-Constrained Fleet Learning </a> was accepted to CoRL 2025!
+    Oguzhan A.'s paper <a href="https://openreview.net/forum?id=M1e2PEMLp2"> Distributed Upload and Active Labeling for Resource-Constrained Fleet Learning </a> was accepted to CoRL 2025!

_posts/2025-09-10-FairSynergy.md

Lines changed: 93 additions & 0 deletions
@@ -0,0 +1,93 @@

---
title: 'FairSynergy: Fair Resource Allocation for Fleet Intelligence'
description: GLOBECOM 2025 paper
categories: blog
---

*By Oguzhan B.*

# FairSynergy: Fair Resource Allocation for Fleet Intelligence

2025 IEEE Global Communications Conference (GLOBECOM 2025)

**TLDR:** Give (or substitute) each extra unit of compute or memory to the agent that benefits most, lifting fleet accuracy by up to +25% in inference and +11% in training, with even bigger gains as fleets grow!

<audio controls>
  <source src="{{site.baseurl}}/images/post/fairsynergy_podcast.mp3" type="audio/mpeg">
  Your browser does not support the audio element.
</audio>

[arXiv](https://arxiv.org/abs/2509.03353) |
[Code](https://github.com/UTAustin-SwarmLab/Fair-Synergy)

## Motivation
Intelligent fleets of ML-equipped drones, autonomous cars, or phones rarely look uniform. Some agents have much better on-board compute or memory, while others face harder inputs, such as hard-to-parse images and complex prompts. The cloud assists the fleet by allocating its limited resources, but allocating them for optimal collective performance across heterogeneous agents is far from trivial. If we split cloud resources evenly, we waste the budget on saturated parts of the accuracy-vs-resource curve while starving the steep ones. We ask:

- **Q1 (Fairness):** Where does the next unit of resource raise accuracy the most?
- **Q2 (Multivariate Utility):** How can we substitute between the multiple types of ML resources available?

<figure style="text-align: center;">
  <img src="{{site.baseurl}}/images/post/FS_sysarch.png" alt="FairSynergy System Plot" height="auto" style="margin: auto; display: block;">
  <figcaption>FairSynergy Overview</figcaption>
  <p></p>
</figure>

## Contributions
We introduce FairSynergy, a novel framework to allocate cloud resources fairly across intelligent heterogeneous agents.
- **Diminishing Marginal Returns:** We empirically show that learning and inference curves across vision and language models consistently share the same concave pattern: accuracy improves with more resources (e.g., model size, data, compute) but with diminishing returns.
- **Fair Allocation in ML:** With a concave objective, Network Utility Maximization (NUM) is the natural fit. Its optimality condition turns into an intuitive policy: allocate so that each agent's next unit of resource yields the same marginal accuracy gain.
- **Cobb-Douglas:** Like capital and labor in production, compute/model capacity and labeling/data curation both drive accuracy, each with diminishing returns and substitution between them. We form a multivariate utility capturing this to co-allocate the resources. (Figs. 5–6)
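As a rough illustration of the Cobb-Douglas idea (a toy utility with made-up exponents, not the paper's fitted curves), both diminishing returns and substitution fall out of exponents below one:

```python
# Illustrative Cobb-Douglas-style utility: U(c, l) = A * c**alpha * l**beta,
# with alpha, beta < 1 so each resource has diminishing marginal returns.
# All constants here are hypothetical, not fitted values from the paper.

def utility(compute, labels, A=1.0, alpha=0.4, beta=0.3):
    return A * (compute ** alpha) * (labels ** beta)

# Diminishing returns: each extra unit of compute adds less than the last.
gain_1 = utility(2, 10) - utility(1, 10)
gain_2 = utility(3, 10) - utility(2, 10)
assert gain_2 < gain_1

# Substitution: halving compute can be offset by extra labeling effort
# (scale labels by 2**(alpha/beta) to restore the same utility level).
base = utility(4, 10)
traded = utility(2, 10 * (2 ** (0.4 / 0.3)))
assert abs(base - traded) < 1e-9
```

The substitution line is exactly the capital-for-labor trade from production economics, transplanted to compute-for-labels.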
<figure style="text-align: center;">
  <img src="{{site.baseurl}}/images/post/FS_concavetraining.png" alt="Concave Training" height="auto" style="margin: auto; display: block;">
  <figcaption>The Law of Diminishing Marginal Returns: Training</figcaption>
  <p></p>
</figure>

<figure style="text-align: center;">
  <img src="{{site.baseurl}}/images/post/FS_concaveinference.png" alt="Concave Inference" height="auto" style="margin: auto; display: block;">
  <figcaption>The Law of Diminishing Marginal Returns: Inference</figcaption>
  <p></p>
</figure>

## FairSynergy Framework
- **Inference Setting (RTI), Univariate Case (Compute):** At short intervals, the framework estimates each agent's next-unit accuracy gain from extra cloud compute, gives the next unit to the agent with the highest gain, and repeats until gains are roughly equalized, then reshuffles as conditions change. This hits the fairness/efficiency sweet spot without heavy tuning.
- **Learning Setting (DL), Bivariate Case (Compute + Labeling Effort):** The framework applies the same "next-unit" idea in a quick two-step loop: hold labels fixed and split compute by the one-resource rule; then hold compute fixed and split labeling time by the same rule. A few rounds settle into a stable co-allocation, so compute-hungry agents get cycles and data-hungry agents get labels.
- **Handling Heterogeneity:** Harder tasks show larger early gains, so the allocator leans into them first and naturally rebalances as gains even out. The result is proportional fairness and fleet-level accuracy that scales with more agents and changing workloads, with no math knobs to tune.
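The one-resource rule can be sketched as a greedy water-filling loop; this is a minimal illustration with made-up concave utilities, not the paper's implementation:

```python
import math

# Minimal sketch of "equalize marginal gains": repeatedly hand the next unit
# of a resource to the agent whose accuracy would improve the most. For
# concave utilities, this greedy loop is optimal for integer budgets.

def allocate_greedy(utilities, budget):
    """utilities: list of concave functions u_i(x); budget: integer units."""
    alloc = [0] * len(utilities)
    for _ in range(budget):
        # Marginal gain of one more unit for each agent.
        gains = [u(x + 1) - u(x) for u, x in zip(utilities, alloc)]
        best = max(range(len(gains)), key=gains.__getitem__)
        alloc[best] += 1
    return alloc

# Hypothetical accuracy curves; agent 2 is steepest, so it gets the most.
utils = [lambda x: math.log1p(0.5 * x),
         lambda x: math.log1p(1.0 * x),
         lambda x: math.log1p(2.0 * x)]
alloc = allocate_greedy(utils, 12)
assert sum(alloc) == 12
assert alloc[2] >= alloc[1] >= alloc[0]
```

The bivariate (DL) case alternates this same loop over compute and labeling budgets until the split stabilizes.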
<figure style="text-align: center;">
  <img src="{{site.baseurl}}/images/post/FS_cobbdouglas.png" alt="Multivariate Objective" height="auto" style="margin: auto; display: block;">
  <figcaption><strong>Extending to a Multivariate ML Utility</strong>: The Cobb-Douglas Production Function for a Given Capital and Labor</figcaption>
  <p></p>
</figure>

## Results
We compare our method to common baselines and standard fair allocation methods:
- **Fair-Synergy (Ours)** allocates compute (and labels) to equalize next-unit accuracy gains per agent using fitted concave utilities.
- **Random Allocation** splits the available compute (and labels) at random among agents.
- **Uniform Allocation** gives every agent the same share of compute (and labels), ignoring local differences.
- **Classical NUM** optimizes a uniform log utility (identical response curves), so allocation follows equalized marginal gains without task-specific reweighting or accounting for agent heterogeneity.
- **Dominant Resource Fairness (DRF)** equalizes each agent's dominant resource share, targeting max-min fairness across resources.
- **Leximin** prioritizes the worst-off first, maximizing the minimum utility, then the next, and so on. It is a stricter form of fairness than the other methods.

<figure style="text-align: center;">
  <img src="{{site.baseurl}}/images/post/FS_result_boxplot.png" alt="Results Boxplot" height="auto" style="margin: auto; display: block;">
  <figcaption>How well does Fair-Synergy perform compared to the benchmarks?</figcaption>
  <p></p>
</figure>

<figure style="text-align: center;">
  <img src="{{site.baseurl}}/images/post/FS_result_lineplot.png" alt="Results Lineplots" height="auto" style="margin: auto; display: block;">
  <figcaption>How does Fair-Synergy scale with increasing number of agents?</figcaption>
  <p></p>
</figure>

### Key findings
- **Fairness-Accuracy Trade-off:** We observe that under stricter fairness conditions, the worst-case accuracy increases while the aggregate accuracy decreases. However, our method achieves the best trade-off, obtaining both the best performance and the necessary fairness conditions.
- **Scalability:** As fleets grow, our method converts heterogeneity into higher accuracy, outperforming the remaining baselines (+6% RTI, +30% DL) and Uniform/DRF/Leximin (+13% RTI, +51% DL) at 100 agents.

## Impact
Fair-Synergy treats fairness as physics, not philosophy. Fairness here means that no agent can increase its accuracy without reducing another agent's accuracy by more. Because accuracy is concave in resources, the right move is to spend cloud resources where marginal gains are steepest, and to optimize jointly over multiple substitutable resources. A fair allocation is also the most efficient allocation, because concavity makes "equalize marginal gains" optimal.
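Concretely, the equalize-marginal-gains rule is the optimality (KKT) condition of the underlying NUM problem; schematically, in our own notation rather than the paper's:

$$
\max_{x_1,\dots,x_n \ge 0} \; \sum_{i=1}^{n} U_i(x_i) \quad \text{s.t.} \quad \sum_{i=1}^{n} x_i \le R,
$$

whose optimum, for concave differentiable $U_i$, satisfies $U_i'(x_i^{*}) = \lambda$ for every agent $i$ with $x_i^{*} > 0$: all agents receiving resources share the same marginal accuracy gain $\lambda$.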

_posts/2025-10-10-CPS.md

Lines changed: 86 additions & 0 deletions
@@ -0,0 +1,86 @@

---
title: 'Constrained Posterior Sampling: Time Series Generation with Hard Constraints'
description: NeurIPS 2025 paper
categories: blog
---

*By Sai Shankar Narasimhan*

# Forcing Diffusion Models to Respect the Rules: Constrained Posterior Sampling for Time Series

Proceedings of the 39th Conference on Neural Information Processing Systems (NeurIPS 2025)

**TLDR:** Constrained Posterior Sampling (CPS) is a diffusion-based sampling algorithm that generates time series samples satisfying user-specified constraints.

[arXiv](https://arxiv.org/abs/2410.12652)

## Motivation
Imagine you are a quantitative researcher who wants to stress-test trading strategies. You would want access to a tool that can precisely generate high-fidelity stock price time series for prompts like

**Generate Tesla’s stock price for the next month with 5% volatility.**

Additionally, the stocks domain has inherent rules that a generated sample should adhere to, such as the opening and closing prices for a day being bounded by the highest and lowest prices for that day. This problem exists in almost all engineering domains, where domain-specific constraints arise from the laws of physics or the underlying process. We refer to this as the **constrained time series generation problem**.

## Expectations of a Desired Approach

- **Training-free:** Training a generative model for specific constraints lacks scalability. For example, a model trained for mean constraints cannot easily adapt to argmax constraints.
- **Hyperparameter-free:** The choice of guidance weights in guidance-based approaches with diffusion models significantly affects sample quality, and optimizing these weights becomes computationally intractable when dealing with hundreds of constraints.
- **Independent of external models:** Prior works often employ external models, such as the discriminator in Generative Adversarial Networks (GANs), alongside the generative model to enforce realism, necessitating additional training and complex sampling procedures.

<figure style="text-align: center;">
  <img src="{{site.baseurl}}/images/post/cps_intro_fig.png" alt="High level description plot" height="auto" style="margin: auto; display: block;">
  <figcaption>Figure 1 - High-Level Description of Constrained Posterior Sampling (CPS): Here, we show an example where CPS generates a daily stock price time series with natural constraints, such as the bounds on the opening and closing prices of the stock.</figcaption>
  <p></p>
</figure>

This post features **Constrained Posterior Sampling**, a training-free, plug-and-play approach that enforces hard constraints on the outputs of your diffusion generative models without compromising sample quality.

## Constrained Posterior Sampling

CPS builds on diffusion models, a class of generative models that iteratively refine noise into data. In a diffusion model, you start with random noise and repeatedly “denoise” it to produce a sample. **The key insight of CPS is: after each small denoising step, gently push the output back toward satisfying the constraints before further denoising.** In other words, at every step of the generation, we enforce the rules, at least a little. By the end, the output satisfies all the rules by construction.

## Under the Hood: How CPS Enforces Constraints in Diffusion Sampling

Given a specific constraint set from which we aim to generate a realistic time series sample, we perform the following steps:
- **Estimate the posterior mean.** At each denoising step, we obtain an estimate of the clean sample.
- **Project the posterior mean estimate.** We project the posterior mean estimate towards the constraint set, using off-the-shelf optimization routines.
- **Perform DDIM sampling.** Using the projected posterior mean estimate, we obtain the noisy sample for the next denoising step via DDIM sampling. We refer the reader to our manuscript for further technical details.

This process continues for all the denoising steps. During projection, CPS performs an unconstrained optimization step that minimally perturbs the posterior mean estimate while reducing the constraint violation. This is concisely depicted in Figure 2.
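The three steps above can be sketched with a toy Gaussian diffusion and a simple box constraint. Everything here is a stand-in: the "denoiser", the noise schedule, and the penalty weights are invented for illustration, not taken from the paper's trained model or solver setup:

```python
import numpy as np

# Toy sketch of the CPS loop: estimate the clean sample, nudge it toward the
# constraint set with a growing penalty, then take a deterministic
# DDIM-style step (eta = 0, zero predicted noise residual).

rng = np.random.default_rng(0)
T = 50
alpha_bar = np.linspace(0.999, 1e-4, T)  # hypothetical noise schedule

def denoise(x_t, t):
    # Stand-in denoiser: pretend the model predicts zero noise residual,
    # so the posterior mean estimate is just a rescaling of x_t.
    return x_t / np.sqrt(alpha_bar[t])

def project(x0, lo=-1.0, hi=1.0, weight=1.0):
    # Penalty-weighted step toward the box constraint lo <= x <= hi.
    target = np.clip(x0, lo, hi)
    return x0 + weight * (target - x0)

x = rng.standard_normal(8)  # start from pure noise
for t in range(T):
    x0_hat = denoise(x, t)            # 1. posterior mean estimate
    w = (t + 1) / T                   # 2. penalty grows over denoising steps
    x0_hat = project(x0_hat, weight=w)
    a_next = alpha_bar[t + 1] if t + 1 < T else 1.0
    x = np.sqrt(a_next) * x0_hat      # 3. DDIM-style update

assert np.all(x >= -1.0) and np.all(x <= 1.0)  # hard constraint satisfied
```

Because the penalty weight reaches 1 at the final step, the last projection is an exact clip, so the returned sample satisfies the box constraint by construction, mirroring the CPS guarantee.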
52+
53+
<figure style="text-align: center;">
54+
<img src="{{site.baseurl}}/images/post/cps_approach_fig.png" alt="CPS Approach" height="auto" style="margin: auto; display: block;">
55+
<figcaption>Figure 2 - Constrained Posterior Sampling: We show the graphical model for one step of denoising in CPS: check Algorithm 1 in our manuscript.</figcaption>
56+
<p></p>
57+
</figure>
58+
59+
To ensure minimal effects on the sample quality, we introduce penalty coefficients that minimize the perturbations of the posterior mean estimate during the initial denoising steps when the signal-to-noise ratio is very low. Towards the final denoising steps, the penalty coefficients are very large to ensure constraint satisfaction.
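The projection step can be viewed as a penalized least-squares problem; schematically, in our own notation:

$$
\hat{x}_0^{\text{proj}} = \arg\min_{z} \; \lVert z - \hat{x}_0 \rVert_2^2 + \rho_t \, \mathcal{C}(z),
$$

where $\hat{x}_0$ is the posterior mean estimate, $\mathcal{C}(z)$ measures constraint violation, and the penalty coefficient $\rho_t$ grows from small values early in denoising (low signal-to-noise ratio) to very large values at the final steps, forcing $\mathcal{C}(\hat{x}_0^{\text{proj}}) \to 0$.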
Note that CPS satisfies all the requirements listed above for a desired approach.

- CPS is a **training-free procedure** and can be extended to any constraint class.
- The approach is **hyperparameter-free**: the penalty coefficients are fixed by the noise schedule of the diffusion model, and **CPS leverages existing off-the-shelf optimization solvers** to perform the projection step.
- CPS does not require any external model to improve sample realism. In fact, **the adverse effects of projection are mitigated by the subsequent denoising steps**.

## Applications and Results Across Domains

In our work, we showcase the effectiveness of CPS in terms of sample quality and diversity on six diverse datasets spanning the environmental, traffic, and finance domains. In particular, in terms of sample realism and utility metrics (see Figure 3), we show that CPS outperforms SOTA approaches by 10%.

<figure style="text-align: center;">
  <img src="{{site.baseurl}}/images/post/cps_results_1.png" alt="CPS Main Qualitative Results" height="auto" style="margin: auto; display: block;">
  <figcaption>Figure 3 - Main Qualitative Results: CPS provides high-fidelity synthetic time series samples that match real time series data. The real test samples from which the constraints are extracted are shown in blue. The samples generated using the extracted constraints are shown in red. Across all datasets, the baselines suffer from the adverse effects of the projection step, whereas CPS generates high-quality samples.</figcaption>
  <p></p>
</figure>

Additionally, in terms of tracking real time series samples, we show that CPS outperforms SOTA methods by 42%. Specifically, CPS does not suffer from sample quality degradation under a large number of constraints, while other approaches break down in such settings (Figure 4). We refer the readers to Figures 3 and 4 to observe the sample quality and tracking abilities of CPS.

<figure style="text-align: center;">
  <img src="{{site.baseurl}}/images/post/cps_results_2.png" alt="CPS Tracking Results" height="auto" style="margin: auto; display: block;">
  <figcaption>Figure 4: CPS tracks the real data samples as the number of constraints increases. Increasing the number of constraints reduces the size of the constraint set, and an ideal approach should effectively generate samples that resemble the real time series samples that belong to the constraint set. Here, we show a qualitative example from the Stocks dataset. Observe that CPS accurately tracks the real sample that concurs with the specified constraints, while other approaches suffer.</figcaption>
  <p></p>
</figure>

images/post/FS_cobbdouglas.png
3.16 MB
845 KB

images/post/FS_concavetraining.png
1.48 MB

images/post/FS_result_boxplot.png
228 KB

images/post/FS_result_lineplot.png
535 KB

images/post/FS_sysarch.png
374 KB
