Austin Chen
8 months ago
@Adrian-Regenfuss thank you for the update and for the intention to return the funds; I'll follow up with information on how to do that. And congrats on the new job!
Austin Chen
8 months ago
Hi Jeroen! I wanted to thank you for taking the time to post this application. I don't watch much in the way of videos, but I did play through a chunk of your lead poisoning video and found it well-produced and informative. Best of luck towards hitting your funding goal!
Austin Chen
8 months ago
I'm funding this up to the minimum funding bar, based on:
Having met @luiscostigan and hearing about the work of AI Safety Tokyo, while visiting earlier this January
The prominence of the TAIS Conference in Tokyo -- the fact that two of Manifund's AI Safety regrantors ( @DanHendrycks and @RyanKidd ) are going, and that Scott reposted about it on his most recent open thread, are both strong signals of the conference's value.
Holding regular weekly study sessions might seem like a small thing, but I really respect the dedication it shows!
I'm happy to buy this as retroactive impact certificate; I don't know if the large retro funders in this round are excited to buy back first-year impact (I hope they will be!), but either way I want to support this work.
Austin Chen
8 months ago
Crossposting some notes on this project! First a tweet from @toby, explaining his decision to fund this project:
Since we worked on a review of Optimism's RPGF design I am interested to see how the new Manifund social impact certs/bonds will work. I applied to be a regranter on this thing and spent all my funds on Jessica's project to start a journal focused on paradigm development in psychiatry. It's a very important project and totally tracks with my goals for Care Culture.
I would encourage other regranters to also consider funding this. It's well outside of classic EA / rationalist thinking. It's not a problem with a very clear in-out impact model; personnel and scenius development will be the decisive factor. It matters to have Jess working on this!
And second an endorsement from @lily, also posted in our newsletter:
Jessica’s project takes up the mantle of a favorite crusade of mine, which is “actually it was a total mistake to apply the scientific method to psychology, can we please do something better.” She’s written extensively on psychiatric crises and the mental health system, and I would personally be excited to read the work of people thinking seriously about an alternative paradigm. I’m not sure whether the journal structure will add anything on top of just blogging, but I’d be interested to see the results of even an informal collaboration in this direction.
(Note that I probably wouldn’t expect the SFF or LTFF to fund this; ACX Grants 2025 maybe, and the EAIF I’m not sure. But I’d be happy to see something like it exist.)
Austin Chen
8 months ago
@briantan appreciate the update, especially how in-depth it is; this looks like good progress and I'm excited for the rest of your program!
Austin Chen
9 months ago
I really like that Cam has already built & shipped this project, and it appears to have gotten viral traction and had to be shut down due to costs; rare qualities for a grant proposal! The project takes a very simple premise and executes well on it; playing with the demo makes me want to poke at the boundaries of AI, and made me a bit sad that it was just an AI demo (no chance to test my discernment skills); I feel like I would have shared this with my friends, had this been live.
Research on AI deception capabilities will be increasingly important, but I also like that Cam created a fun game that interactively helps players think a bit about how far the state of the art has come, esp with the proposal to let users generate prompts too!
Austin Chen
9 months ago
I like this project because the folks involved are great. Zvi is famous enough to almost not need introduction, but in case you do: he's a widely read blogger whose coverage of AI is the best in the field; also a former Magic: the Gathering pro and Manifund regrantor. Meanwhile, Jenn has authored a blog post about non-EA charities that has significantly shaped how I think about nonprofit work, runs an awesome meetup in Waterloo, and on the side maintains this great database of ACX book reviews. (seriously, that alone is worth the price of admission)
I only have a layman's understanding of policy, economics, and academia (and am slightly bearish on the theory of change behind "publish in top journals"), but I robustly trust Zvi and Jenn to figure out the right way to move forward with this.
Austin Chen
9 months ago
Brandon walks the walk when it comes to education; his ACX Book Review contest entry on the subject was not only well written, but also well structured, with helpful illustrations and different text formats to drive home a point. (And the fact that he won is extremely high praise, given the quality of the competition!) I'm not normally a fan of educational interventions, as their path to impact feels very long and uncertain, but I'd be excited to see what Brandon specifically can cook up.
(Disclaimer: I, too, have some skin in the game, with a daughter due in ~July)
Austin Chen
9 months ago
@DrAmbreenDeol awesome, thanks for the response! I've now approved your project.
Austin Chen
9 months ago
Hi @DrAmbreenDeol! Thank you for submitting this proposal; it's notable that @Kunvar is excited for this and offering $8.5k for this study.
We'll most likely be able to approve this as within the bounds of our 501c3 mission, but there are two pieces of information that we're missing (questions that we ask as part of our standard grant proposal):
1) Who is on your team and what's your track record on similar projects?
2) What other funding are you or your project getting?
Austin Chen
9 months ago
This was one of the grants I had feedback on during the initial review process, reposting here:
FWIW, I was pretty impressed with the quality of Fluidity Forum speakers & attendees (eg Jane Flowers, Milan Griffes, AKC). Unfortunate that it overlapped with Manifest 2023 :'(. I would be tentatively interested in seeing the videos, but this might be just aspirational - I haven't even made it through the backlog of Manifest videos.
I gave this grant a 3 on Scott's 0-4 scale: "Good, plausible grant: recommend if money available and further research is positive"
Austin Chen
9 months ago
Approving this grant as in line with our work towards funding research on AI safety!
The process for this grant was a bit unusual - instead of the grant being initiated by the grantee, Jueyan approached us asking if we would be willing to fiscally sponsor this. After looking into this researcher's background and work, we decided that it would be in line with our normal area of charitable work, and agreed to facilitate this (with a 5% fiscal sponsorship fee).
Jueyan expressed an interest in getting this grant out the door ASAP, which is why there's only the one-sentence description for now; he's offered to put in a longer writeup later.
Austin Chen
9 months ago
I asked a friend with more bio expertise to take a look, here was the feedback:
1. I need to see the other BLAST results, because the legionella one seems cherry picked. I'd need to see if there are other proteins that are closer matches that he ignored because he couldn't come up with a hypothesis for them.
2. The BLAST result doesn't seem great even out of context. A short sequence with 2 mismatches and 2 gaps is not a great match. If he could show the 3D structure is similar that would be a good next step, but as is it's not great.
3. He has good epidemiological data for T1D but relies on random news stories and an out of context journal article for his Legionella's prevalence. He would need to come up with some comparable maps of Legionella's prevalence and show they line up in some way.
4. These graphs don't match up and he doesn't have a good explanation.
Austin Chen
9 months ago
Hi Stephen, thanks for submitting this project! Bio funding is very much outside my personal area of expertise, but I'll ask around and see if anyone in the field might be willing to lend their eyes on reviewing this.
To set expectations: we haven't issued regrantor budgets for 2024 yet, as we're still fundraising ourselves. It's a shame that this proposal missed the recent ACX Grants round, as it would have been a great fit - but with the upcoming ACX Grants Impact Certs side, there may be an influx of bio-curious investors/donors interested in this.
Also, I really enjoyed the bits of humor in your proposal - as someone who's fallen backwards into reading lots of these things, it's so nice when a proposal is a delight to read on its own.
Austin Chen
9 months ago
@wasabipesto for some context, SirCryptomind was asking whether Manifold could hire him for his moderation work; while we didn't want to bring on an ongoing, fulltime paid position for this at the moment, I encouraged him to submit an entry for retroactive funding for his mod work as part of the Community Fund. The community fund hasn't paid out our third round yet and I expect SirCryptomind's work to fall within scope for this.
Austin Chen
9 months ago
Funded this with $2.5k and approving this! This falls within the category of "encourage interesting scientific experiments" and is low-budget, so it's a cheap bet to see what this team can accomplish. I'm glad they are releasing their work as open source too (though would love to see a link somewhere!)
Austin Chen
9 months ago
I'm donating a token amount for now to signal interest, and get this project across its minimum funding bar. I have not listened much to The Inside View, but the guests Michael has attracted are quite good (eg recently Manifund regrantor @evhub and grantee @Holly_Elmore). The production quality seems pretty good at a glance (with editing & transcripts available). I also really like that Michael has been consistently doing these for a year; I could imagine wanting to fund this project a lot more, upon further research or testimonials.
My primary misgivings are the low-ish view counts on YouTube, and uncertainty on whether Michael's videos have been helpful for others - this is where more testimonials like Cameron's are helpful!
Austin Chen
10 months ago
Reposting my notes on this while evaluating for ACX:
Okay so Scott put "Ask Austin?" for this, but really I feel much more qualified to evaluate software/startup proposals rather than forecasting ones. Also, despite founding a prediction market startup, I'm not, like, an inherent cheerleader for forecasting, and actually have some deep skepticisms about the viability of Tetlock-style "forecasting for government policy"; such approaches seem sexy, but if corporations aren't effectively using forecasting internally, I'm skeptical that the govt will be able to do so either.
So with those biases in mind: I'm not especially excited by the proposal, but if anyone seems like the right person to do this kind of thing, it seems like S has the right background for it. I would be extremely happy if they succeeded at convincing policymakers to take forecasts more seriously. The win condition would be similar to that from Dylan Matthew's talk at Manifest: legitimizing forecasts in the eyes of people who work on policy. My hesitancies are 1) I'm not sure if funding this would make such adoption likely to happen (it seems like a long shot), and 2) as above, I'm not even that sure that such adoption would be significantly beneficial to the world.
Austin Chen
10 months ago
My comments as an ACX evaluator:
I like Tetra a lot, based on their writing and Manifold usage; I strongly considered offering them a Manifund regrantor budget (and would have done this if our overall budget pool was like 20% larger). That said, I'm a bit skeptical that 1) assurance contracts are a huge unmet need, or 2) they'll be able to create a sufficiently-nice-looking platform. I think "platform that looks nice" is actually very tricky but also necessary for wide adoption.
(I'd feel much better about point 2 if they would pair up with someone whose specialty is web design)
Since then it looks like Tetra is working with Jordan of Spartacus, which seems like a great fit (I would have suggested this if Scott didn't)! I'm a little unsure if "jam two people with no prior experience of collaborating" will actually work well, but tbh this kind of describes me and James/Stephen prior to ACX Grants, so there's at least some precedent. Best of luck!
Austin Chen
10 months ago
Approving this project as in line with our work on AI Safety! I think this is a pretty compelling writeup, and a few people who I trust are vouching for the organizers.
Notably, Remmelt and Linda made an excellent fundraising appeal on EA Forum -- they were very successful at turning past successes into a concrete appeal for funding, drawing in donations from many members of the EA community, rather than a single large donation from established funders. I'm very happy that Manifund can help with this kind of diversified fundraising. (I also appreciate that Linda has written up recommendations for other projects she finds compelling, including some on our site!)
Austin Chen
10 months ago
@josephbloom Thanks for posting this update! Your grant was one of the very first grants made through Manifund's regranter program, and I'm quite happy to see your follow ups. I especially appreciate you staying in touch with Marcus and Dylan to give them a sense of how their grants are being used as well as your next research steps.
re: compute funding, I imagine you've already seen Superalignment Fast Grants; it seems like a good fit for your ask and I'd highly encourage you to apply (Leopold, who I believe is running the program, is also a Manifund regrantor!)
Austin Chen
10 months ago
@NeoOdysseus I've lowered your minimum funding requirement to $2,500, as requested.
Austin Chen
10 months ago
Hi Claire! Just wanted to note that Athena looks super cool, and I'm glad Evan Hubinger seems to think so as well. Successfully building out a mentorship program and support network can be tricky, especially to establish something with lasting impact; I'm happy to see that you have many different kinds of relevant experience. Hoping to see you succeed here, and let us know if we can help!
Austin Chen
10 months ago
Approving this project; echoing Greg, I think AI Plans has made good progress (eg with its site design) since I last saw them. I also like some of the judges they chose for their December critique-athon, such as Nate Soares and Tetraspace.
Austin Chen
11 months ago
Approving this project as appropriate under our charitable mission in the cause area of AI Governance. It's good to see the endorsements from @ZachSteinPerlman and tentatively @MarcusAbramovitch, as two people who I think are very clued-in to this space!
Austin Chen
11 months ago
Approving this project as fitting within our charitable mission in the forecasting cause area! I've previously spoken with Marcel van Diemen, who struck me as very motivated and entrepreneurial. I think Base Rate Times started very strong right out of the gate -- it got a lot of retweets and mentions on EA/rat twitter, which is rare for a forecasting project. My major area of concern is that I'm not yet sold on whether there is repeated demand for the BRT form factor, vs just a novelty thing that gets linked to once and then not checked in the future. In any case, best of luck with BRT!
Austin Chen
11 months ago
I also think "seeding a wiki" or "having a LW dialogue" might be an interesting addendum or alternative to "writing a FAQ". A wiki might allow more participation for people with different perspectives (though perhaps loses out on coherence of vision), while the LW dialogue format might be a good fit for getting to the heart of disagreements and nuanced takes.
Austin Chen
11 months ago
Hey Isaac! I think this is an interesting proposal, and am funding this partway to see if others agree that this kind of thing would be useful and good.
I think e/acc is intellectually interesting (and am intrigued by some of its ideas, eg that silicon descendants would be no morally worse than biological descendants), and would like to have a clearer understanding of what the key principles of its proponents are. Collaborating with core e/accs and EAs to get a balanced perspective sounds like a good idea (otherwise I'd worry that the FAQ would come across as a bit of a caricature).
Austin Chen
11 months ago
@NunoSempere I also think that Apart is interesting; at the very least, I think they have an amount of "gets things done" and marketing power that otherwise can be missing from the EA ecosystem. And they have a really pretty website!
I am similarly confused why they haven't received funding from the usual suspects (OpenPhil, LTFF). On one hand, this makes me concerned about adverse selection; on the other, "grants that OP/LTFF wouldn't make but are Actually Good" would be an area of interest for Manifund. I would be in favor of someone evaluating this in-depth; if you plan on doing this yourself, I'd offer to contribute $2k to your regrantor budget (or other charitable project of your choice eg ALERT) for a ~10h writeup.
See also two previous Manifund projects from Apart's leadership:
Austin Chen
11 months ago
Hi Mick, this seems like an interesting proposal; did you mean to submit to the Manifold Community Fund for consideration for prize funding? If not -- I'd encourage you to do so, as it seems a better fit for that than general charitable funding. (Note that this would require recreating the proposal as an entry, since we don't support migrating grants to impact certs at this time unfortunately)
Austin Chen
11 months ago
And approving this project, as furthering our cause of fostering the EA community!
Austin Chen
11 months ago
@wasabipesto note that the funding deadlines are somewhat loose - we can extend that for you if you want to give investors a few more weeks, esp since we've been late on the first round of evals (orz)
Austin Chen
11 months ago
Approving this project! Kudos to the team at Rethink for incubating this, and to Coby and Aishwarya for getting this off the ground.
Austin Chen
12 months ago
I definitely think the Manifold API experience could be greatly improved, and I mentioned on Discord that a guide that gets users to create bots would be great! So I am offering $200 as a token of support towards that. I do think the financial numbers on this proposal may not make a lot of sense from the investor's perspective though; a $10k total valuation implies that 1/3 of all of the value of Manifold Community Fund's $10k pool will be awarded to this project, which doesn't seem super likely.
@nikki, I might suggest lowering at least the minimum funding, and perhaps the valuation too. Right now, at $6000 that implies you would spend 150-300 hours at the minimum on these projects, at your quoted rate of $20-$40; I think it's better to plan to start small (eg 20-40 hours to start) and hold on to more equity, which you can continue to sell once you've started showing results!
Austin Chen
12 months ago
This looks cool! @cc6 has been a fun person to interact with on Manifold, and I love to see people working on projects that solve their own pain points (where I imagine cc6 has lots of expertise). Hooking into the Common App APIs (which I didn't know existed) seems smart, but I'm sure they can figure out other ways to accomplish "Manifold x College Admissions" if that doesn't work out.
I'm funding this most of the way, leaving room for others to express interest; if nobody does in a few days I expect to fill the rest too.
Austin Chen
12 months ago
Funding this to the minimum ask, as it seems like very good bang-for-the-buck and it seems like two people I respect (Gavin and Misha) have gotten value from this. I have lots of uncertainty about the value of starting new hubs vs consolidating in a few regions, but happy to put a bit of funding down towards a cheap experiment.
Austin Chen
12 months ago
Approving! The Aurora Scholarship is exactly the kind of program that we're excited for regrantors to initiate; props to Joel and Renan for driving this.
Austin Chen
12 months ago
(I've also doubled the max funding goal from $4.8k to $9.6k, per Joel's request)
Austin Chen
12 months ago
Approving this project now that it's hit its minimum funding bar. I wasn't aware that Renan and Joel had previously solicited Zhonghao (or set up the Aurora Scholarship, for that matter); both are awesome to hear.
Austin Chen
12 months ago
Approving this project now as it has hit its minimum funding bar, and Talos is definitely in line with our charitable mission in the area of AI Governance. Best of luck with further fundraising!
Austin Chen
12 months ago
I watched Chris's lecture at Foresight Vision Weekend 2023 last week, and found it an interesting and novel way to think about agents. Very early stage, but I could believe that there's a chance it helps me and others better understand coordination across a variety of agents (AIs, humans, orgs, etc). I also met Evan Miyazono at the same conference, and was impressed by his track record and energy (Evan and I scheduled time to chat about one of his proposals later this week).
Chris is also a friend of Rachel and me, which cuts both ways: I trust Chris as a person, but don't want to fund too much of this project myself to avoid conflicts of interest, thus the $1k donation for now.
Austin Chen
12 months ago
@NeelNanda Thanks for weighing in; agreed that the asking amount is very low. I've funded to half of the min funding bar based on your endorsement.
Austin Chen
12 months ago
Approving as a simple research project which could help projects go well.
FWIW, I'm more skeptical than Nuno that work of the form "make large lists of project ideas" is useful, as execution (not ideas) is almost always the bottleneck. But as usual, happy to be proven wrong!
Austin Chen
12 months ago
@MarcusAbramovitch Note that this was not true at the time that LTFF made the grant to Manifold (Feb 2022) -- we had launched just a couple of months earlier, had not yet incorporated, and the only funding we'd received was grants ($20k from Scott Alexander and $2k from Paul Christiano). The $200k from LTFF was a strong credible signal that the EA community cared about our work.
You can see more about Linch's reasoning for Manifold here. I think it holds up quite well (very biased obv), and I would be extremely happy if Manifund or LTFF or others in EA could figure out how to send six figures to similarly good teams.
One more recent point of comparison might be Apollo Research, which is also seeking similar amounts of grant funding while also thinking about the for-profit route down the line.
Austin Chen
12 months ago
@case for now, I'd actually suggest changing this project to represent "umbrella for Case's Manifold contributions" and include the first week's churn as an example of the work. Off the top of my head, the Bounty hackathon project and various open-source contributions could also be eligible if you bundle them into this project.
Austin Chen
12 months ago
Approving this project! I was excited by the original launch of ALERT (and applied as a reservist, I think). I think the idea is good, but as they say in startupland, "execution is everything" - best wishes on the execution, and let us know if we can help!
Austin Chen
12 months ago
I would be very interested in reading the results of this survey, to better understand how to position EA and longtermism! I appreciate especially that there is an established team with a good track record planning to work on this, and that they would publish their findings openly.
I'm funding half of the required ask atm, since I feel that other regrantors or funders in the EA space would be interested in participating too. (Also, my thanks to @Adrian and @PlasmaBallin for flagging that this proposal has been underrated!)
Austin Chen
12 months ago
Approving this as being in line with Manifund's charitable purpose! Happy to see that Joel and Gavriel like Greg's work in this space.
Austin Chen
12 months ago
Approving this project as falling within Manifund's charitable mission in fostering biosecurity research.
Austin Chen
12 months ago
Approving this as it falls within our purview of technical AI safety research. Best of luck with your research, Robert!
Austin Chen
12 months ago
@NeelNanda thanks for weighing in! Manifund doesn't have a UK entity set up, unfortunately. One thing that might be possible would be to figure out a donation swap where eg you commit to donating $10k via Givewell UK and some US-based person who was planning on giving to Givewell instead donates $10k to this project, and you both take tax deductions for your respective countries.
Austin Chen
12 months ago
@Brent It's not clear to me what the successful examples are... which have been impactful for you? I think foreign language and MCATs are two domains where SRS has proven its worth, but outside of those memorization-heavy domains, the flashcard approach hasn't become popular. It's also damning that most successful people don't rely on SRS, afaict.
I think there's something definitely interesting about the core observation of SRS - "learning happens via repeated exposures to the subject, and we can program that to our benefit." But it also seems to me that "flashcards" are a dead end, UX-wise, given all the research that has gone into them for relatively little adoption. I think there's a lot of space for innovating on other interaction models -- eg in what ways are social feeds like Twitter a spaced repetition system? Or Gmail?
One other random note - for a while, I've wanted a SRS/anki thing that helps me stay on top of my various personal contacts (friends, acquaintances, etc). "Making friends" is a domain which lines up neatly with exponential backoff, I think -- it's easiest to make a friend by spending a lot of time with them in the beginning, and then staying in touch gradually less and less over time.
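(Editorial aside: the exponential-backoff intuition above can be sketched in a few lines of Python. This is a minimal illustration, not any real SRS algorithm -- the function name, the initial one-day gap, and the doubling multiplier are all illustrative assumptions.)

```python
from datetime import date, timedelta

def review_schedule(start, first_gap_days=1, multiplier=2.0, reviews=6):
    """Illustrative exponential-backoff schedule: each review gap
    grows by `multiplier`, mirroring how contact (with a card, or a
    friend) can taper off over time."""
    schedule = []
    gap = float(first_gap_days)
    current = start
    for _ in range(reviews):
        current = current + timedelta(days=round(gap))
        schedule.append(current)
        gap *= multiplier
    return schedule

# Gaps of 1, 2, 4, 8, 16, 32 days after the start date.
print(review_schedule(date(2024, 1, 1)))
```

Real systems like Anki adjust the multiplier per item based on recall success, but the shape of the curve is the same.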
Austin Chen
12 months ago
This grant falls outside of our more established pathways, but I'm excited to approve it anyways, as a small bet on a people-first funding approach (where I think the regranting mechanism shines).
I'm a bit baseline skeptical of SRS/anki, having seen tools-for-thought people push for it but fairly unsuccessfully -- eg I was very excited for Quantum Country but it doesn't seem to have gotten wider adoption, nor personally helped me very much. However, I would be excited to be wrong here, and it's possible that LLMs change the game enough for there to be a good angle of attack!
Austin Chen
12 months ago
Approving this project as in line with our mission of advancing technical AI safety.
Thanks to Vincent for getting this project past its initial funding bar!
Austin Chen
12 months ago
@NeoOdysseus Hi Giuseppe, I've pushed back your funding deadline by a month to Jan 21!
Austin Chen
12 months ago
Approving this project, as Lawrence's work falls squarely within Manifund's cause of advancing technical AI safety!
Austin Chen
about 1 year ago
Also investing a small amount as a show of support for Conflux, though I'd definitely love to see more details eventually :P
Austin has stated that Manifold is willing to retroactively fund some of my past projects
To clarify, the Manifold Community Fund payout criteria will be for impact realized between checkpoints, so exclusive of past "impact". The first payout will assess impact from Nov 15-Dec 15 -- so previous eg views of MMP would be excluded, but if an old MMP episode went viral on Dec 1st, then that would count for impact.
Austin Chen
about 1 year ago
This looks like a cool idea and I'm excited to see what Joshua and N.C. Young have in store, as both are longtime active members of the Manifold community! I'm investing a small amount, mostly to leave space for other investors to join in as well.
Note on using the funds for mana: I tentatively think Manifold will also be able to provide mana grants for the Manifold Community Fund projects for boosts/subsidy/prizes/etc, so long as it doesn't end up being distortive on the broader Manifold ecosystem. Still need to figure out the general guidance for mana grants, but don't hesitate to ask!
Austin Chen
about 1 year ago
Funding this to the minimum ask, mostly because 1) the ask is small, 2) I highly trust two of the people involved (Joel and Vivian), and 3) I want to encourage the existence of Qally's, as I could imagine Manifund itself being a client looking to buy retrospective analyses.
I'm actually not sure how big of an issue Long Covid is -- my uninformed take is "not a big problem". But this mostly stems from my emotional reaction against covid safetyism, and isn't very grounded in factual analysis, so I'm excited to see what the research shows!
Austin Chen
about 1 year ago
Hi Chris! Thanks for posting this funding application. I generally am a fan of the concept of retroactive funding for impactful work (more so than almost anyone I know). However, TAIS isn't my area of specialty, and from where I'm standing it's hard for me to tell whether this specific essay might be worth eg $100 or $1000 or $10000. The strongest signals I see are the 1) relatively high karma counts and 2) engagement by @josephbloom on the article.
I'm putting down $100 of my budget towards this for now, and would be open to more if someone provides medium-to-strong evidence for why I should do so.
Austin Chen
about 1 year ago
I'm fairly sure that Scott would be happy to allow you to hold on to your current shares, with the caveat that if you don't accept this current offer, he may not make any other assessment or offer in the future.
Austin Chen
about 1 year ago
Hi Adrian! Thanks for submitting this proposal. I'm not actually sure why people are downvoting you -- I do think this kind of project idea is pretty cool, and I would love to see & fund examples of "good rationalist ideas actually making it into production".
That said, in startups, the mantra is "ideas are cheap, execution is everything". To that end, as a funder I'd be unsure whether you'd be able to spin up a business around this. A few things:
1) It seems like you haven't built a lumenator before? I'd suggest trying this just as a proof point of "yes I actually can make & enjoy making hardware"
2) Validate demand for lumenators! Just because a bunch of EA people have said nice things about them doesn't mean that they would actually buy them; or that the audience extends beyond EA. Before committing to this, see if you can eg pre-sell 10 lumenators to people willing to put down $100 today for a $200 discount on delivery.
3) The "Tesla Roadster" strategy could make sense here -- even if your goal is to get them <$500 for mass market, to start with you might sell bespoke custom lumenators at $2k to the rich rationalist folks first.
4) Stop worrying about legal issues; 99.9% of the time this project fails because you can't build lumenators cheaply enough or you fail to find demand.
5) If you haven't run any kind of side project before, I might start with software -- much cheaper to try and release things, so you learn about the other sides of entrepreneurship (marketing, selling, customer support, biz processes) much faster
6) Find a cofounder? I'm less sure about this one, but it's standard YC advice, and in my experience projects done in teams have a way of going much farther than projects done solo.
If you actually succeed on 1 & 2, that would be a major update for me towards wanting to invest in your company -- I'd probably invest $10k, at least. Some resources for you:
Michael Lynch's blog, an ex-Google software eng who started a solo biz selling
Austin Chen
about 1 year ago
Approving this! Nuno called this out as one of the projects he was most excited to fund in his regrantor app, and I'm happy to see him commit to this.
Austin Chen
about 1 year ago
I'm funding half of the requested $10k ask based on my prior experience chatting with Gabe (see writeup here); Gabe didn't actually withdraw that money at the time, so I'm happy to follow through on that now.
Austin Chen
about 1 year ago
I've updated Jonas's comment above. Evan is also retracting his support for this grant, so we will be unwinding his $50k donation and restoring this project to be in the pending state.
Austin Chen
about 1 year ago
(for context: Jonas posted his reservations independent of my grant approval, and within the same minute)
Austin Chen
about 1 year ago
In light of Jonas's post and the fact that this grant doesn't seem to be especially urgent, I'm going to officially put a pause on processing this grant for now as we decide how to proceed. I hope to have a resolution to this before the end of next week.
Some thoughts here:
We would like to have a good mechanism for surfacing concerns with grants, and want to avoid eg adverse selection or the unilateralist's curse where possible
At the same time, we want to make sure our regrantors are empowered to make funding decisions that may seem unpopular or even negative to others, and don't want to overly slow down grant processing time.
We also want to balance our commitment to transparency with allowing people to surface concerns in a way that feels safe, and also in a way that doesn't punish the applicant for applying or somebody who has reservations for sharing those.
We'll be musing on these tradeoffs and hopefully have clearer thoughts on these soon.
Austin Chen
about 1 year ago
Approving this project! I also especially appreciated that Kriz set up a prediction market on whether they would get to their higher bar of $37k~
Austin Chen
about 1 year ago
Approving this project! It's nice to see a handful of small donations coming in from the EA public, as well as Evan's endorsement; thanks for all your contributions~
Austin Chen
over 1 year ago
@nmp Ah to be clear, we don't require that projects fit inside our areas of interest to stay listed on Manifund; as many promising projects don't exactly fit. You're welcome to leave up your application if you'd like!
Austin Chen
over 1 year ago
Hi Nigel, appreciate you submitting your proposal to Manifund! I think wildfire detection is somewhat outside the scope of projects that our regrantors are interested in, and thus you're unlikely to hit your minimum funding bar here. (A precise statement of our interests is tricky, but the Future Fund's Areas of Interest is a good starting point). Best of luck with your fundraise!
Austin Chen
over 1 year ago
Approving this project as it fits our criteria of "charitable and few downsides". I think publishing a forecast on the effects of an AI treaty could be very helpful. I am more skeptical of "running an open letter urging governments to coordinate to make an AI safety treaty" -- I'd highly encourage working with other players in the AI governance space, as otherwise I expect the impact of an open letter to be ~nil. (Maybe that was already the plan, in which case, great!)
Austin Chen
over 1 year ago
@JordanSchneider Hi Jordan! Good to know about GiveDirectly's ads -- I think that might be a good form factor for Manifund too, as we're currently looking to fundraise. Would love to see the pitch deck (email austin@manifund.org)!
I'm also interested in contributing $5k-$10k of my own regrantor budget; my tentative proposal is that we could send half of our total funding as an unrestricted grant, and the other half as a purchase of advertising time.
Austin Chen
over 1 year ago
Hi Damaris, my best guess is that your application isn't a good fit for Manifund; it's very unclear to me how big data analytics skills are useful for eg AI Safety, or why this skills gap is important to address. Best of luck!
Austin Chen
over 1 year ago
Hi Eden! My best guess is that your project is not a great fit for the Manifund platform; it's very unclear why we should provide charitable funding for your team to acquire a patent (and the requirement for an NDA doesn't help). If you're interested in making your application stronger, I would suggest that you drop your focus on acquiring a patent and just directly move to creating your prototype, and come back when you have a prototype to demo. (That isn't to say that I could promise that the prototype would receive funding, but in any case it would be much more compelling -- see Neuronpedia for an example grant that shipped a prototype before applying)
Austin Chen
over 1 year ago
Approving this grant! The Residency looks like an interesting project; this grant falls within our charitable mission of supporting overlooked opportunities, while not having any notable downside risks.
Austin Chen
over 1 year ago
Hi Lucy! Approving this grant as it fits within our charitable mission and doesn't seem likely to cause any negative effects.
It does look like you have a lot more room for funding; I'm not sure if any of our AI-safety focused regrantors have yet taken the time to evaluate your grant, but if you have a specific regrantor in mind, let me know and I will try to flag them!
Austin Chen
over 1 year ago
Hi Ben, appreciate the application and I'm personally interested in the XST approach here. I have a deep question about whether "founder in residence" as a strategy works at all. I have met a few such "FIR" individuals (usually attached to VC firms), but I'm not aware of any breakout startups in tech that have been incubated this way; they always seem to have been founder-initiated. Some more evidence is that the YC batch where founders applied without ideas seemed to go badly. From Sam Altman:
YC once tried an experiment of funding seemingly good founders with no ideas. I think every company in this no-idea track failed. It turns out that good founders have lots of ideas about everything, so if you want to be a founder and can’t get an idea for a company, you should probably work on getting good at idea generation first.
Of course it's plausible that longtermist startups thrive on different models of incubation than tech ones. Charity Entrepreneurship seems to do fine by finding the individuals first and then giving them ideas to work with?
Also, do you have examples of individuals you'd be excited to bring on for the FIR role? (Ideally people who would actually accept if you made them the offer today, but failing that, examples of good candidates would be helpful!)
Austin Chen
over 1 year ago
Hi Keith! As a heads up, I don't think your project looks like a good fit for any of the regrantors on our platform (we are primarily interested in AI safety or other longtermist causes), so I think it's fairly unlikely you'll receive funding at this time. Cheers~
Austin Chen
over 1 year ago
(@joel_bkr I really appreciate your investigation into this, which my own thoughts echo, and am matching with $2500!)
Austin Chen
over 1 year ago
Hi Jordan, thanks for posting this application. I'm impressed with the traction ChinaTalk has garnered to date, and think better US-China media could be quite valuable. It seems like Joel has much more context on this proposal and I'm happy to defer to his assessments.
I wanted to chime in with a slightly weird proposal: instead of a grant, could we structure this as a sponsorship or purchase of some kind? Eg:
We could purchase ad slots, either to promote relevant EA ideas & opportunities, or to fundraise for Manifund itself
We could buy a large fixed lot of Substack subscriptions to gift to others
There's some precedent for this kind of funder-grantee interaction -- I believe CEA funded TypeIII Audio by buying up a certain amount of audio content generated for the EA Forum and LessWrong.
Austin Chen
over 1 year ago
Hi Alex! You seem like a smart and motivated individual, and I appreciate you taking the time to apply on Manifund. Despite this, I'm not super excited by this specific proposal; here are some key skepticisms to funding this out of my personal regrantor budget:
I'm suspicious of funding more "research into the right thing to do". I would be more excited to directly fund "doing the right thing" -- in this case, directly convincing university admins to fund AI or bio safety efforts.
As a cause area, I view IIDM a bit like crypto (bear with me): many promising ideas, but execution to date has been quite lackluster. Which is also to say, execution seems to be the bottleneck and I'm more excited to see people actually steering institutions well rather than coming up with more ideas on how to do so. As they say in startup-land, "execution is everything".
My guess is that as a university student, your world has mostly consisted of university institutions, leading you to overvalue their impact at large (compared to other orgs like corporations/startups, governments, and nonprofits). I would be much more excited to see proposals from you to do things outside the university orbit.
I would also guess that getting university admins on board will be quite difficult?
Thanks again for your application!
Austin Chen
over 1 year ago
@MSaksena Thanks for the explanation! I understand that nonprofit funders have their hands tied in a variety of ways and appreciate you outlining why it's in Manifund's comparative advantage to fund this as an independent grant.
Someday down the line, I'd love to chat with the Convergent Research team or related funders (like Schmidt Ventures?) about solving the problem of how to "flexibly commit money to adjacent projects". In the meantime, best of luck with your research and thank you for your service!
Austin Chen
over 1 year ago
Approving this! Excited for Manifund's role here in accelerating the concrete research towards mitigating global catastrophic biorisks.
Austin Chen
over 1 year ago
Hi Miti! In general I'm excited for biosecurity work on these topics and excited that Gavriel likes this grant, and expect to approve this. I just wanted to check in on a (maybe dumb) question: given that Convergent Research seems to be both well-funded and also the primary beneficiaries of Miti's work, why aren't they able to fund this themselves?
From CR's website, they don't have a vast pool of funding themselves, and instead seek to incubate FROs that then get follow-on funding. This seems reasonable; I'd be happy to work out other financial arrangements that make sense here such as via a loan or equity.
For example, Ales estimates this work to raise the chance of unlocking funding by 10%+. In that case, assuming a conservative $10m raise for the FRO, that would make Miti's project worth $1m; and assuming a funder's credit portion of 10% for this, that would indicate a $100k value of the grant made. So eg would Ales/CR/the resulting FRO be willing to commit $100k back to Gavriel's regrantor budget, conditional on the FRO successfully raising money?
I apologize if this seems like a money-grubbing ask; I'm coming into this a bit from a "fairness between funders" perspective and a bit of "sanity checking that the work really is as valuable to CR as purported". Manifund just doesn't have that much money at the moment, so being able to extend our capital is important; and also, I'm excited about using good financial mechanisms to make charitable fundraising much much better (ask me about impact certs sometime!).
Austin Chen
over 1 year ago
Approving as this project is within our scope and doesn't seem likely to cause harm. I appreciate Kabir's energy and will be curious to see what the retrospective on the event shows!
Austin Chen
over 1 year ago
I'm not familiar with Alexander or his work, but the votes of confidence from Anton, Quinn, and Greg are heartening.
Approving as the project seems within scope for Manifund (on longtermist research) and not likely to cause harm.
Austin Chen
over 1 year ago
Hi Johnny, thanks for submitting your project! I've decided to fund this project with $2500 of my own regrantor budget to start, as a retroactive grant. The reasons I am excited for this project:
Foremost, Neuronpedia is just a really well-developed website; web apps are one of the areas where I'm most confident in my evaluation. Neuronpedia is polished, with delightful animations and a pretty good UX for expressing a complicated idea.
I like that Johnny went ahead and built a fully functional demo before asking about funding. My $2500 is intended to be a retroactive grant, though note this is still much less than the market cost of 3-4 weeks of software engineering at the quality of Neuronpedia, which I'd ballpark at $10k-$20k.
Johnny looks to be a fantastic technologist with a long track record of shipping useful apps; I'd love it if Johnny specifically and others like him worked on software projects with the goal of helping AI go well.
The idea itself is intriguing. I don't have a strong sense of whether the game is fun enough to go viral on its own (my very rough guess is that there are some onboarding simplifications and virality improvements to be made), and an even weaker sense of whether this will ultimately be useful for technical AI safety. (I'd love if one of our TAIS regrantors would like to chime in on this front!)
Austin Chen
over 1 year ago
Hi Vincent! Thanks for submitting this; I'm excited about the concept of loans in the EA grantmaking space, and appreciate that your finances are published transparently.
I expect to have a list of follow-up questions soon; in the meantime, you might enjoy speaking with the folks at Give Industries, who employ a similar profit-for-good model!
Austin Chen
over 1 year ago
As Manifund is a relatively new funder, I’d been thinking through examples of impactful work that we’d like to highlight, and VaccinateCA came to mind. I initially reached out and made the offer to Patrick, upon hearing that he had donated $100k of his own money into the nonprofit. Patrick nominated Karl to receive this grant instead, and introduced us; Karl and I met for a video call in early July.
What’s special about this grant to Karl is that it’s retroactive — a payment for work already done. Typically, funders make grants prospectively to encourage new work in the future. I’m excited about paying out this retroactive grant for a few reasons:
I want to highlight VaccinateCA as an example of an extremely effective project, and tell others that Manifund is interested in funding projects like it. Elements of VaccinateCA that endear me to it, especially in contrast to typical EA projects:
They moved very, very quickly
They operated an object level intervention, instead of doing research or education
They used technology that could scale up to serve millions
But were also happy to manually call up pharmacies, driven by what worked well
Karl was counterfactually responsible for founding VaccinateCA, and dedicated hundreds of hours of his time and energy to the effort, yet received little to no compensation.
I’d like to make retroactive grants more of a norm among charitable funders. It’s much easier to judge “what was successful” compared to “what will succeed”, especially for public goods; a robust ecosystem of retroactive grants could allow for impact certs to thrive.
I offered $10k as it felt large enough to meaningfully recognize the impact of VaccinateCA, while not taking up too much of my regrantor budget. I do think the total impact of this was much larger; possibly valued in the hundreds of millions of dollars to the US government, if you accept the statistical value of a life at $1-10m. (It’s unclear to me how large retroactive grants ought to be to incentivize good work, and I’d welcome further discussion on this point.) I've set the project to make room for up to $20k of total funding for this, in case others would like to donate as well.
Q: Are you familiar with the EA movement? If so, what are your thoughts?
A: Yeah, I’ve heard a lot about it. To use the lingo, I’ve been “Lesswrong-adjacent for a while”. Taken to extremes, EA can get you to do crazy things — as all philosophies do. But I really like the approach; mosquito nets make sense to me.
I’d observe that a lot of money is out there, looking for productive uses. Probably the constraining factor is productive uses. Maybe you [Manifund] are solving this on a meta level by encouraging productive uses of capital? Austin: we hope so!
Q: What is Karl up to now?
A: I left my last role at Rippling a few months ago, and am now working on my own startup.
It’s still pretty early, and I’m not yet settled on an idea, but I’m thinking of things related to my work on global payrolls at Rippling. I expect more business will be done cross-border, and using instant payments. Today, putting in a wire is very stressful, and this will be true of more and more things.
My idea is to reduce payment errors: money disappearing when payments go to a false account, or an account that is some other random person’s. This will hopefully reduce payments friction, making international business less scary. The goal is to decrease costs, make it easier to hire people, and cut down on fraud.
Thanks to Lily J and Rachel W for feedback on this writeup.
Austin Chen
over 1 year ago
Hi Vikram, thanks for applying for a grant! The projects you're working on (especially LimbX) look super cool. I'm offering $500 for now to get this proposal past its minimum funding bar; some notes as we consider whether to fund it more:
This kind of deep tech is a bit outside of our standard funding hypothesis (which tends to be more longtermist/EA), and also outside my personal area of expertise (software)
I would be excited about Manifund supporting young, talented individuals (similar to Emergent Ventures); but it's possible this represents a dilution in our focus? My grant to Sophia was similar in size/thesis, but in that case I was personally familiar with Sophia.
I'm also just curious: how did you find out about Manifund?
Austin Chen
over 1 year ago
Thanks for posting this application! I've heard almost universal praise for Apollo, with multiple regrantors expressing strong enthusiasm. I think it's overdetermined that we'll end up funding this, and it's a bit of a question of "how much?"
I'm going to play devil's advocate for a bit here, listing reasons I could imagine our regrantors deciding not to fund this to the full ask:
I expect Apollo to have received a lot of funding already and to soon receive further funding from other sources, given widespread enthusiasm and competent fundraising operations. In particular, I would expect Lightspeed/SFF to fund them as well. (@apollo, I'd love to know if you could publicly list at least the total amount raised to date, and any donors who'd be open to being listed; we're big believers in financial transparency at Manifold/Manifund)
The comparative advantage of Manifund regranting (among the wider EA funding ecosystem) might lie in smaller dollar grants, to individuals and newly funded orgs. Perhaps regrantors should aim to be the "first check in" or "pre-seed funding" for many projects?
I don't know if Apollo can productively spend all that money; it can be hard to find good people to hire, harder yet to manage them all well? (Though this is a heuristic from tech startup land, I'm less sure if it's true for research labs).
Austin Chen
over 1 year ago
Funding this as:
I've previously had the opportunity of cohosting an EA hackathon with Sophia following EAG Bay Area; she was conscientious and organized, and I'd happily cohost something again
I'm personally excited about supporting more concrete software development within the EA sphere, on the margin (compared to eg research papers)
The ask is quite low ($500), and the project promises to be both fast (lasting a week) and soon (by Jul 27); I really like the ethos of moving quickly on a small budget.
I don't have specific insights into Solar4Africa, but I'm curious to see the results!
Austin Chen
over 1 year ago
Hi Haven, thanks for submitting your application! I like that you have an extensive track record in the advocacy and policy space and am excited about you bringing that towards making AI go well.
I tentatively think that funding your salary to set up this org would be fairly similar to funding attempts to influence legislation (though I would be happy to hear if anyone thinks this isn't the case, based on what the IRS code states about 501c3s). That doesn't make it a non-starter for us to fund, but we would scrutinize this grant a lot more, especially as we'd have about a ~$250k cap across all legislative activities given our ~$2m budget (see https://ballotpedia.org/501(c)(3))
Some questions:
Where do you see this new org sitting in the space of existing AI Gov orgs? Why do you prefer starting a new org over joining an existing one, or working independently without establishing an org at all?
Have you spoken with Holly Elmore? Given the overlap in your proposals, a conversation (or collaboration?) could be quite fruitful.
Austin Chen
over 1 year ago
Hi Jeffrey! I do think EA suffers from a lack of inspiring art and good artists, and appreciate that you are trying to fix this. Do you happen to have any photos or links to the pieces that you intend to put on display?
Austin Chen
over 1 year ago
Hi Bruce! I'm a fan of software projects and modeling, and appreciate the modest funding ask. I'm not going to be funding this at this time, but hope you continue to make progress and would love to see what your demo/video looks like when it's ready!
One note on your application: it does use a lot of jargon, which makes it harder to understand what you're going to do, reminding me of this passage from Scott:
Another person’s application sounded like a Dilbert gag about meaningless corporate babble. “We will leverage synergies to revolutionize the paradigm of communication for justice” - paragraphs and paragraphs of this without the slightest explanation of what they would actually do. Everyone involved had PhDs, and they’d gotten millions of dollars from a government agency, so maybe I’m the one who’s wrong here, but I read it to some friends deadpan, it made them laugh hysterically, and sometimes they still quote it back at me - “are you sure we shouldn’t be leveraging synergies to revolutionize our paradigm first?” - and I laugh hysterically.
I think concrete examples (or the demo/video you mentioned) would help!
Austin Chen
over 1 year ago
Hey Allison, thanks for submitting this! Upvoting because this looks like a thoughtful proposal and I'm interested in hearing about how the August workshop goes.
I would guess that a $75k minimum funding goal is higher than our regrantors would go for, given that most of our large-dollar regrantors are primarily focused on AI Safety, but I'm curious to hear what our bio or policy regrantors have to say about this kind of project!
Austin Chen
over 1 year ago
Putting down $20k of my regrantor budget for now (though as mentioned, we'll likely structure this as a SAFE investment instead of a grant, once we've finished getting commitments from regrantors).
Austin Chen
over 1 year ago
Thanks for submitting this, Aaron! We really like this kind of concrete object-level proposal, which is ambitious yet starts off affordable, and you have quite the track record on a variety of projects. A few questions:
As this is a project for Lantern Bioworks, would you be open to receiving this as an investment (eg a SAFE) instead of grant funding?
If funded, what do you think your chances of success are, and where are you most likely to fail? (I've set up a Manifold Market asking this question)
Could you link to your Lightspeed application as well?
Conflict of interest note: Aaron was an angel investor in Manifold Markets' seed round.
Austin Chen
over 1 year ago
Wanted to call out that Holly has launched a GoFundMe to fund her work independently; it's this kind of entrepreneurial spirit that gives me confidence she'll do well as a movement organizer!
Check it out here: https://www.gofundme.com/f/pause-artificial-general-intelligence
Austin Chen
over 1 year ago
I'm excited by this application! I've spoken once with Holly before (I reached out when she signed up for Manifold, about a year ago) and thoroughly enjoy her writing. You can see that her track record within EA is stellar.
My hesitations in immediately funding this out of my own regrantor budget:
Is moratorium good or bad? I don't have a strong inside view and am mostly excited by Holly's own track record. I notice not many other funders/core EAs excited for moratorium so far (but this argument might prove too much)
Should Holly pursue this independently, or as part of some other org? I assume she's already considered/discussed this with orgs who might employ her for this work, such as FLI or CAIS?
I would be even more excited if Holly found a strong cofounder; though this is my bias from tech startups (where founding teams are strongly preferred over individual founders), and I don't know if this heuristic works as well for starting movements.
Austin Chen
over 1 year ago
Hi Kabir! Unfortunately, I'm pretty skeptical that https://ai-plans.com/ is going to be much used and would not fund this out of my regrantor budget.
This kind of meta/coordination site is very hard to pull off, as it suffers from network effect problems (cf the cold start problem). Without established connections or a track record of successful projects, even if the idea is good (which I'm not judging), the project itself won't hit critical mass. I might change my mind if you demonstrated substantial interest (hundreds of users, or a few very passionate users)
I appreciate that you've coded up your own website (I think?). Kabir, at this stage I would focus not on any specific EA project but rather on just becoming a better software developer; apply for internships/jobs.
If you really want to do something "EA/AI Safety-ish" (though I don't think this would be a good rationale), consider just writing criticisms for individual plans and posting them on the EA Forum.
Austin Chen
over 1 year ago
Thanks for the writeup, Adam! I like that the grant rationale is understandable even for myself (with little background in the field of alignment), and that you've pulled out comparison points for this salary ask.
I generally would advocate for independently conducted research to receive lower compensation than at alignment organizations, as I usually expect people to be significantly more productive in an organization where they can receive mentorship (and many of these organizations are at least partially funding constrained).
I share the instinct that "working as an independent researcher is worse than in an org/team", but hadn't connected that to "and thus funders should set higher salaries for work at orgs", so thanks for mentioning it.
Tangent: I hope one side effect of our public grant process is that "how much salary should I ask for in my application" becomes easier for grantees. (I would love to establish something like Levels.fyi for alignment work.)
Austin Chen
over 1 year ago
Haha yeah, I was working on my writeup:
I generally think it's good that David's work exists to keep EA/longtermist causes honest, even though I have many disagreements with it
For example, I agree a lot with his discussion on peer review in EA publications, while disagreeing with his criticism of Wytham Abbey.
I would especially be interested in hearing what David thinks about our regranting program!
I especially like that David is generally thoughtful and responsive to feedback eg on EA Forum and article comments.
In the grand scheme of things, $2k seemed like a very small cost to cover 2 years' worth of future blogging.
On reflection, I might have been too hasty to grant the largest amount, perhaps due to mentally benchmarking against larger grants I've been looking at. At this point in time I might downsize it to $1k if there were a convenient way to do that (and we decided to change the grant). But probably not worth it here given the small sums, except as a potential data point for the future.
Austin Chen
over 1 year ago
Thanks for the writeup, Rachel W -- I think paying researchers in academia so they are compensated more closely to industry averages is good. (It would have been helpful to have a topline comparison, eg "Berkeley PhDs make $50k/year, whereas comparable tech interns make $120k/year and fulltime make $200k/year")
I really appreciate Rachel Freedman's willingness to share her income and expenses. Talking about salary and medical costs is always a bit taboo; it's brave of her to publish these so that other AI safety researchers can learn what the field pays.
Other comments:
We'd love to have other regrantors (or other donors!) help fill the remainder of Rachel Freedman's request; there's currently still a $21k shortfall from her total ask.
Rachel W originally found this opportunity through the Nonlinear Network; kudos to the Nonlinear folks!
Austin Chen
over 1 year ago
This grant is primarily a bet on Gabriel, based on his previous track record and his communication demonstrated in a 20min call (notes)
Started Stanford AI Alignment; previous recipient of OpenPhil fieldbuilding grant
His proposal received multiple upvotes from screeners on the Nonlinear Network
I also appreciated the display of projects on his personal website; I vibe with students who hack on lots of personal side projects, and the specific projects seem reasonably impressive at a glance
EA aligned
I don't feel particularly well qualified to judge the specifics of the proposed experiment myself, and am trusting that he and his colleagues will do a good job reporting the results
Gabe requested $5000 for this project, but as he's planning to apply to several other sources of funding (and other Nonlinear Network grantmakers have not yet reached out), filling half of that with my regrantor budget seemed reasonable.
None
Austin Chen
over 1 year ago
I saw from your EA Forum post (https://forum.effectivealtruism.org/posts/hChXEPPkDpiufCE4E/i-made-a-news-site-based-on-prediction-markets) that you were looking for grants to work on this. As it happens, we're working on a regranting program through Manifund, and I might be interested in providing some funding for your work!
A few questions I had:
- How much time do you plan on investing on Base Rate Times over the next few months?
- What has traffic looked like (eg daily pageviews over the last month or so?)
- How do you get qualitative feedback from people who view your site?
Also happy to find time to chat: https://calendly.com/austinchen/manifold
Austin Chen
over 1 year ago
@DamienLaird: Thanks for the update! I'm sorry to hear that you won't be continuing to write, as I've enjoyed your blogging these last few months. As I've conveyed via email, I appreciate the refund offer but think you should keep the investment, as you've already dedicated significant time towards what I consider to be good work.
Best wishes with your next projects!
Austin Chen
over 1 year ago
Hey! I think it's cool that you've already built and shipped this once already -- I'd love to see more prediction sites flourishing! I appreciate that you provided an image of the site too; it looks pretty polished, and the image really helps us understand how the site would function.
Given that the site is already mostly built, it seems like your hardest challenge will be finding users who are excited to participate -- especially if you're targeting the Bulgarian audience, as forecasting is already something of a niche, so Bulgarian forecasting would seem to be a niche within a niche. To that end, I'd definitely recommend conducting user interviews with people who you think might be a good fit (I found the books "The Mom Test" and "Talking to Humans" to really help me get comfortable with user interviews).
A couple questions:
What kind of feedback did your first set of users have on the site?
What do you plan on doing differently this time around to try and get more usage?
Austin Chen
over 1 year ago
Hi Devansh, I very much think the problem of retroactive impact evaluation is quite difficult and am excited to see people try and tackle the area! It's nice to see that you've already lined up three nonprofits (from your local area?) to assess.
My questions:
Have you already spoken with these nonprofits about assessing their impact? If so, what have their responses been like?
Have you identified the evaluators who will be doing the work of impact assessment? If so, what are their backgrounds like?
Austin Chen
over 1 year ago
Hi Jesus! A Google Sheets add-on for Manifold is definitely not something we'd ever considered before; thanks for suggesting it! I think a lot of professionals spend their time in Google Sheets, and making it easier to access forecasts or use forecasting results in their formulas seems potentially very useful.
Some questions I had:
(As Ernest asked) how specifically would it work? Do you have a mockup or something that would demonstrate its functionality?
Is there a simpler version of this you could make that would be useful (eg a template Google Sheet with formulas that read from Manifold's API, instead of an add-on)?
Who do you think would be using this add-on, besides yourself? Have you spoken with them about their use cases?
Austin Chen
over 1 year ago
Hi Ryan, I really love the innovative way you've chosen to use Manifund (as a bidding mechanism between three different projects to allocate ad spend!) And naturally, we're super interested in guidelines to help inform future impact market rounds.
A couple of questions for you:
How did you settle on these three areas (college students, earthquakes, and hurricane forecasts)?
For a project with $500 to spend on ads, how many people would you expect to reach?
Austin Chen
over 1 year ago
Hi Samuel, it's cool to see your commitment to making forecasting fun -- a big part of what I think has made Manifold succeed is an emphasis on ease of use and levity~
A couple questions:
What does your ideal participant look like? Can you point to a few examples of people who are already excited to participate in this?
What kind of impact are you hoping to have, as a result of running these fun events?
Austin Chen
over 1 year ago
Hey Joshua! I've always believed that the comments on Manifold were super helpful in helping forecasters improve their accuracy -- it seemed so obvious as to not even need testing in an RCT, haha. It's cool to see the amount of rigor you're committing to this idea, though!
Some questions for you:
Based on the different possible outcomes of your experiment, what different recommendations would your project generate for prediction platforms? Eg if you find that comments actually reduced forecasting accuracy somehow, would the conclusion be that Manifold should turn off comments?
What specific forecasting platform would you use (is it one that you'd build/have already built?)
How many participants do you expect to attract with the $10k prize pool? How would you recruit these participants?
Austin Chen
over 1 year ago
Hey Valentin! Always happy to see new proposals for ways to incorporate Manifold where different users spend their time. I'm not a user of Telegram myself, but I know a lot of folks worldwide are!
I'm curious:
How many users (either total or monthly) have your popular Telegram bots received? How much usage have they seen?
What kind of Telegram channels or group chats do you expect to make use of the bot? What kind of questions would they ask?
Austin Chen
over 1 year ago
Hey David, thanks for this proposal -- I loved the in-depth explainer, and the fact that the experiment setup allows us to learn about the results of long-term predictions on a very short timeframe.
Some questions:
Am I correct in understanding that you're already running this exact experiment, just with non-superforecasters instead of superforecasters? If so, what was the reasoning for starting with them over superforecasters in the first place?
How easily do you expect to be able to recruit 30 superforecasters to participate? If you end up running this experiment with fewer (either due to funding or recruiting constraints), how valid would the results be?
Austin Chen
over 1 year ago
Hey William, I'm always excited to see cool uses of the Manifold API -- and Kelly bet sizing is an idea we've kicked around before. Awesome to see that it's a project you already have in progress! As you might know, Manifold is open source (we just added a limit order depth chart courtesy of Roman) and we're open to new contributions; though probably to start, a standalone app is a better way of testing out the user interface. And feel free to hop in our #dev-and-api channel on Discord with questions~
Some questions for you:
What tech stack are you building this in?
One concern I've always had with Kelly is that it doesn't seem to incorporate degree of certainty, making it seem hard to use in real contexts -- e.g. if two equally liquid markets are both at 20% and I think they should both be 50%, Kelly recommends the same course of action even if one is "Will this coin come up heads" and the other is "Will the president be Republican in 2025". Does this seem true/like an issue to you?
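The concern above can be made concrete with a minimal sketch (a hypothetical helper, not Manifold's or the applicant's actual code): for a binary market at price m where you believe the true probability is p, the standard Kelly fraction to stake on YES is (p - m) / (1 - m), which depends only on p and m -- not on how confident you are in your estimate of p.

```python
def kelly_fraction(p: float, m: float) -> float:
    """Kelly fraction of bankroll to bet on YES in a binary market.

    p: your estimated probability that the event happens
    m: current market price (implied probability)
    A YES share costs m and pays 1, so net odds are b = (1 - m) / m,
    and Kelly's f = (b*p - (1 - p)) / b simplifies to (p - m) / (1 - m).
    Returns 0 when the market price is at or above your estimate.
    """
    if not (0.0 < m < 1.0):
        raise ValueError("market price must be strictly between 0 and 1")
    return max(0.0, (p - m) / (1.0 - m))

# Two markets both at 20%, both believed to be 50%: a coin flip
# and a hard-to-estimate political question get identical sizing.
coin = kelly_fraction(0.5, 0.2)
politics = kelly_fraction(0.5, 0.2)
assert coin == politics  # both 0.375 of bankroll, regardless of certainty
```

This illustrates why the comment flags certainty as a gap: plain Kelly treats a well-calibrated 50% and a wild-guess 50% identically, which is why practitioners often bet a fraction of full Kelly.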
Austin Chen
over 1 year ago
Hi Damien, it's cool that you've already been putting significant time into writing up and publishing these posts; I've just subscribed to your Substack! You should consider cross-posting your articles to the EA Forum for increased visibility ;)
A couple questions that might help investors thinking about investing:
What kind of feedback have you gotten on your blog posts so far?
Where do you see your blog adding value, compared to other sources of info on GCRs?
Austin Chen
over 1 year ago
Hi Hugo, I really appreciate that you're trying to bring forecasting to a wider audience via translations (I used to scanlate manga from Japanese to English, haha). A couple questions for you:
Can you give a few examples of forecasting content that you'd intend on translating into Portuguese, and an estimate of how many such pieces you would translate using your funding?
How would you plan on driving traffic or interest to your new website?
Austin Chen
over 1 year ago
Hi Sheikh! This seems like a neat project - it's awesome to hear that Nuno is involved here too. A couple questions that might help investors evaluating this:
What are the deliverables if experimentation goes well -- eg published paper? Blog post? Interactive website?
Roughly how much time do you and Nuno expect to put into this before deciding whether to scale up?
Austin Chen
over 1 year ago
For the record, capturing a discussion on Discord: This proposal was submitted late to the ACX Minigrants round, and normally would not be included in the round.
That said, in light of 1) the topicality of the proposal, 2) Ezra's past track record, and 3) desire to be impartial in supporting competitors to Manifold, I'm leaning towards allowing this proposal to receive angel and retro funding.
Let me know if there are any objections!
For | Date | Type | Amount
---|---|---|---
Year one of AI Safety Tokyo | 8 months ago | user to user trade | 545 |
Run a public online Turing Test with a variety of models and prompts | 8 months ago | user to user trade | 250 |
<b56cc7d7-202e-4f21-a705-91cbbf7cc620> | 8 months ago | tip | 1 |
Making 52 AI Alignment Video Explainers and Podcasts | 9 months ago | project donation | 500 |
EEG using a generalizable ML model + 32 channel PCB | 9 months ago | project donation | 2500 |
Experiments to test EA / longtermist framings and branding | 11 months ago | project donation | 5000 |
<6a7b8e55-d580-40fc-b357-a713f428c9b2> | 11 months ago | profile donation | 10000 |
Manifund Bank | 11 months ago | mana deposit | +100000 |
AI Safety Serbia Hub - Office Space for (Frugal) AI Safety Researchers | 11 months ago | project donation | 1100 |
London Manifold.love dating shows | 11 months ago | user to user trade | 200 |
Manifold x College Admissions | 12 months ago | user to user trade | 100 |
Manifund Bank | 12 months ago | mana deposit | +10000 |
Mapping neuroscience and mechanistic interpretability | 12 months ago | project donation | 1200 |
Mirrorbot | 12 months ago | user to user trade | 50 |
Manufacture Manyfold Manifolders in the Maritime Metropolis | 12 months ago | user to user trade | 100 |
Estimating annual burden of airborne disease (last mile to MVP) | 12 months ago | project donation | 3600 |
Manifold merch store | 12 months ago | user to user trade | 20 |
Manifund Bank | 12 months ago | mana deposit | +5 |
Manifund Bank | about 1 year ago | mana deposit | +10 |
Manifund Bank | about 1 year ago | mana deposit | +1 |
Manifund Bank | about 1 year ago | deposit | +4 |
Manifund Bank | about 1 year ago | deposit | +10 |
Invest in the Conflux Manifold Media Empire(??) | about 1 year ago | user to user trade | 15 |
Manifund Bank | about 1 year ago | deposit | +1 |
A tool for making well sized (~Kelly optimal) bets on manifold | about 1 year ago | user to user trade | +0 |
Forecast Dissemination Mini-Market 2 of 3: Hurricane Hazards | about 1 year ago | user to user trade | +100 |
Blog about Forecasting Global Catastrophic Risks | about 1 year ago | user to user trade | +225 |
A tool for making well sized (~Kelly optimal) bets on manifold | about 1 year ago | user to user trade | +105 |
Telegram bot for Manifold Markets | about 1 year ago | user to user trade | +20 |
Manifold Markets Add-on for Google Sheets | about 1 year ago | user to user trade | +151 |
Manifold feature to improve non-resolving popularity markets | about 1 year ago | user to user trade | +219 |
<8c5d3152-ffd8-4d0e-b447-95a31f51f9d3> | about 1 year ago | profile donation | +100 |
Artificial General Intelligence (AGI) timelines ignore the social factor at their peril | about 1 year ago | user to user trade | 100 |
Holly Elmore organizing people for a frontier AI moratorium | over 1 year ago | project donation | 2500 |
One semester living expenses for MIT/Harvard-based researcher | over 1 year ago | project donation | 500 |
Neuronpedia - Open Interpretability Platform | over 1 year ago | project donation | 2500 |
Manifund Bank | over 1 year ago | withdraw | 10 |
VaccinateCA | over 1 year ago | project donation | 10000 |
Recreate the cavity-preventing GMO bacteria BCS3-L1 from precursor | over 1 year ago | project donation | 20000 |
Funding for Solar4Africa app development | over 1 year ago | project donation | 500 |
<3bd68c4f-0fcc-4840-aaff-c8d6dd95b88e> | over 1 year ago | profile donation | +200 |
Reflective altruism | over 1 year ago | project donation | 2000 |
Manifund Bank | over 1 year ago | deposit | +50000 |
Manifund Bank | over 1 year ago | withdraw | 100 |
Make large-scale analysis of Python code several orders of magnitude quicker | over 1 year ago | user to user trade | 900 |
Make large-scale analysis of Python code several orders of magnitude quicker | over 1 year ago | user to user trade | 100 |
Forecast Dissemination Mini-Market 2 of 3: Hurricane Hazards | over 1 year ago | user to user trade | 250 |
Blog about Forecasting Global Catastrophic Risks | over 1 year ago | user to user trade | 499 |
A tool for making well sized (~Kelly optimal) bets on manifold | over 1 year ago | user to user trade | 80 |
Telegram bot for Manifold Markets | over 1 year ago | user to user trade | 90 |
Manifold Markets Add-on for Google Sheets | over 1 year ago | user to user trade | 381 |
Manifold Markets Add-on for Google Sheets | over 1 year ago | user to user trade | 101 |
Manifold feature to improve non-resolving popularity markets | over 1 year ago | user to user trade | 365 |
Manifund Bank | over 1 year ago | deposit | +1000 |
Manifund Bank | over 1 year ago | deposit | +2000 |
Manifund Bank | over 1 year ago | deposit | +1000 |