@cadenza_labs
We are a new AI Safety Org, focusing on Conceptual Interpretability
https://cadenzalabs.org/
Our goal is to conduct research that contributes to solving AI alignment. Broadly, we aim to work on whichever technical alignment projects have the highest expected value; our current best ideas for research directions are in interpretability. More about our research agenda can be found here.
| For | Date | Type | Amount ($) |
|---|---|---|---|
| Cadenza Labs: AI Safety research group working on own interpretability agenda | 11 months ago | project donation | +5,000 |
| Cadenza Labs: AI Safety research group working on own interpretability agenda | 12 months ago | project donation | +100 |
| Cadenza Labs: AI Safety research group working on own interpretability agenda | 12 months ago | project donation | +10 |
| Cadenza Labs: AI Safety research group working on own interpretability agenda | 12 months ago | project donation | +100 |
| Cadenza Labs: AI Safety research group working on own interpretability agenda | 12 months ago | project donation | +790 |
| Cadenza Labs: AI Safety research group working on own interpretability agenda | 12 months ago | project donation | +1,000 |
| Cadenza Labs: AI Safety research group working on own interpretability agenda | 12 months ago | project donation | +210 |
| Cadenza Labs: AI Safety research group working on own interpretability agenda | 12 months ago | project donation | +500 |