The second Workshop on Compiler Techniques for Sparse Tensor Algebra (CTSTA) aims to bring together researchers interested in compiler techniques, programming abstractions, libraries/frameworks, algorithms, and hardware for sparse tensor algebra and sparse array programs. Sparse tensor algebra is widely used across many disciplines where performance is critical, including scientific computing, machine learning, and data analytics. The large number of applications, optimization techniques, data structure types, and specialized hardware targets creates a need for automation, and in recent years there has been considerable interest in compiler techniques that automatically generate sparse tensor algebra code. The workshop will feature talks from leading researchers in academia and industry on applications, code generation, source code transformation and optimization, automatic scheduling, data structure modeling, compilation to different types of hardware, specialized accelerators, extensions to new types of sparse array operations, and applications of these techniques beyond sparsity to areas such as lossless compression. The workshop will last one day and will include invited talks, submitted talks, and discussion.


Sun 18 Jun

Displayed time zone: Eastern Time (US & Canada)

07:30 - 09:00  Breakfast (Catering, at Royal)
09:00 - 11:00

09:00 (5m)   Day opening: Introduction - Fredrik Kjolstad (Stanford University)
09:05 (15m)  Talk: Software and Hardware for Sparse ML - Fredrik Kjolstad (Stanford University)
09:20 (15m)  Talk: Integrating Data Layout into Compilers and Code Generators - Mary Hall (University of Utah)
09:35 (15m)  Talk: Tackling the challenges of high-performance graph analytics at compiler level - Gokcen Kestor (Pacific Northwest National Laboratory)
09:50 (10m)  Panel: Discussion
10:00 (5m)   Break (Social)
10:05 (15m)  Talk: Challenges and Opportunities for Sparse Compilers in LLM - Zihao Ye (University of Washington)
10:20 (15m)  Talk: The Sparse Abstract Machine - Olivia Hsu (Stanford University)
10:35 (15m)  Talk: TeAAL: A Declarative Framework for Modeling Sparse Tensor Accelerators - Nandeeka Nayak (University of Illinois at Urbana-Champaign)
10:50 (10m)  Panel: Discussion
11:00 - 11:20  Coffee break (Catering)
11:20 - 12:30

11:20 (15m)  Talk: Accelerating Sparse Matrix Computations with Code Specialization - Maryam Mehri Dehnavi (University of Toronto)
11:35 (15m)  Talk: A General Distributed Framework for Contraction of a Sparse Tensor with a Tensor Network - Raghavendra Kanakagiri (University of Illinois Urbana-Champaign)
11:50 (15m)  Talk (virtual): Automatic Differentiation for Sparse Tensors - Amir Shaikhha (University of Edinburgh)
12:05 (15m)  Talk: Compiler Support for Structured Data - Saman Amarasinghe (Massachusetts Institute of Technology)
12:20 (10m)  Panel: Discussion
12:30 - 14:00  Lunch (Catering, at Royal)
14:00 - 15:30

14:00 (15m)  Talk: Learning workload-aware cost model for sparse tensor program - Jaeyeon Won (Massachusetts Institute of Technology)
14:15 (15m)  Talk: Autoscheduling for Sparse Tensor Contraction - Kirshanthan Sundararajah (Purdue University)
14:30 (10m)  Panel: Discussion
14:40 (15m)  Talk: Fantastic Sparse Masks and Where to Find Them - Shiwei Liu (The University of Texas at Austin)
14:55 (15m)  Talk (virtual): Moving the MLIR Sparse Compilation Pipeline into Production - Aart Bik (Google) and Peiming Liu (Google)
15:10 (15m)  Panel: Discussion
15:25 (5m)   Day closing: Closing - Fredrik Kjolstad (Stanford University) and Saman Amarasinghe (Massachusetts Institute of Technology)

15:30 - 16:00  Coffee break (Catering)
16:00 - 17:50  Poster Session and Free-Form Discussion (1h50m)

Call for Talks

We are soliciting 15-minute talks for the second Workshop on Compiler Techniques for Sparse Tensor Algebra (CTSTA). Relevant topics include applications, programming language constructs, compilers, libraries/frameworks, and hardware for sparse tensor algebra. Talks can present technical work, new ideas, thoughts about future needs, or other related topics that you are excited about. Already-published ideas are welcome. There will be no proceedings, so talks do not require a submitted paper. If you are interested, please submit a short description (100-200 words).