Since I was quite young, my goal has been to help as many sentient beings as possible, as much as possible, and at around age 13 I decided to prioritize X-risk and improving the long-term future. Toward this end, growing up I studied philosophy, psychology, social entrepreneurship, business, economics, the history of information technology, and futurism.
A few years ago I wrote a book, “Ways to Save The World,” which imagined broad, innovative strategies for preventing existential risk and improving the long-term future.
Upon discovering Effective Altruism in January 2022 while studying social entrepreneurship at the University of Southern California, I did a deep dive into EA and rationality, decided to take a closer look at the possibility of AI-caused X-risk and lock-in, and moved to Berkeley to do longtermist community building work.
I am now looking to close down a small business I have been running so that I can work full time on AI-enabled safety research and longtermist trajectory change research, including concrete mechanisms. I welcome offers of employment or funding as a researcher in these areas.
While there are different value functions, I believe there is a best possible value function.
This may exist at the level of physics, perhaps something to do with qualia that we don’t yet understand, and I think it would be useful to have an information theory of consciousness, which is something I have been thinking about.
But ultimately, even if it is not at the level of physics, I think you can in theory postulate a meta-social choice theory which evaluates every possible social choice theory, under all possible circumstances, for every possible mind or value function, and find some sort of game-theoretic equilibrium which all value functions, all social choice theories for evaluating those functions, and all meta-social choice theories for deciding between choice theories converge on as the universal best possible set of moral principles. I think this is fundamentally about axiology: what moral choice in any given situation creates the most value across the greatest number of minds/entities/value functions/moral theories? I believe this question has an objective answer; there is actually a best thing to do, and there are good things and bad things to do, even if we don’t know what these are. Moral progress is possible and real, not a meaningless concept.
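To make the structure of this idea a bit more concrete, here is a minimal toy sketch (the outcomes, value functions, and aggregation rules are all invented for illustration, not a claim about the true axiology): a few value functions assign utilities to a few outcomes, several candidate “social choice theories” each pick a best outcome, and we check whether they converge.

```python
# Toy illustration of the "meta-social choice" idea above. Everything here is
# invented; the point is only the structure: do different aggregation rules
# converge on the same choice across a set of value functions?
import math

outcomes = ["A", "B", "C"]

# Each (hypothetical) value function assigns a utility to each outcome.
value_functions = {
    "hedonist":    {"A": 10, "B": 6, "C": 1},
    "preference":  {"A": 7,  "B": 8, "C": 2},
    "egalitarian": {"A": 6,  "B": 7, "C": 9},
}

# A few candidate aggregation rules ("social choice theories").
def utilitarian_sum(utils):   # total welfare
    return sum(utils)

def maximin(utils):           # care only about the worst-off valuation
    return min(utils)

def nash_product(utils):      # bargaining-style product (assumes positive utilities)
    return math.prod(utils)

rules = {"sum": utilitarian_sum, "maximin": maximin, "nash": nash_product}

def best_outcome(rule):
    return max(outcomes, key=lambda o: rule([vf[o] for vf in value_functions.values()]))

choices = {name: best_outcome(rule) for name, rule in rules.items()}
print(choices)
print("Rules converge on a single choice:", len(set(choices.values())) == 1)
```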
Wow, this is exciting. I agree this is one of the most important things we should be working on right now.
I very much agree that we need less deference and more people thinking for themselves, especially on cause prioritization. I think this is especially important for people who have high talent/skill in this direction, as I think it can be quite hard to do well.
It’s a huge problem that the current system is not great at valuing and incentivizing this type of work; I think this pushes a lot of the people who could be highly competent at cause prioritization to go in other directions. I’ve been a huge advocate for this for a long time.
I think it is somewhat hard to address systematically, but I’m really glad you are pointing this out and inviting collaboration on your work. I do think concentration of power is extremely neglected and one of the things that most determines how well the future will go (and not just in terms of extinction risk, but in terms of upside/opportunity cost risks as well).
Going to send you a DM now!
Hey Trevor, it’s been a while! I just read Kuhan’s quick take, which referred to this quick take. Great to see you’re still active!
This is very interesting. I’ve been evaluating a cause area I think is very important and potentially urgent—something like the broader class of interventions of which “the long reflection” and “coherent extrapolated volition” are examples; essentially, how do we make sure the future is as good as possible, conditional on aligned advanced AI?
Anyways, I found it much easier to combine tractability and neglectedness into what I called “marginal tractability,” meaning how easy it is to increase the success of a given cause area by, say, 1% at the current margin.
I feel like trying to abstractly estimate tractability independent of neglectedness was very awkward, and not scalable: tractability can change quite unpredictably, so it isn’t really a constant factor but something you need to keep reevaluating as conditions change over time.
Asking the tractability question “If we doubled the resources dedicated to solving this problem, what fraction of the problem would we expect to solve?” isn’t a bad trick, but in an extremely neglected cause area it is really hard to answer because there are so few existing interventions, especially measurable ones. In that case, investigating some of the best potential interventions is really helpful.
I think you’re right that the same applies when investigating specific interventions. Neglectedness is still a factor, but it’s not separable from tractability; marginal tractability is what matters, and that’s easiest to investigate by actually looking at the interventions to see how effective they are at the current margin.
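To illustrate what I mean by “marginal tractability,” here is a minimal sketch; the diminishing-returns curve and all the numbers are assumptions for the illustration only, not an empirical claim.

```python
# A minimal sketch of "marginal tractability" (the diminishing-returns curve
# and all numbers here are assumptions for illustration only).
import math

def fraction_solved(resources, scale=100.0):
    """Hypothetical curve: fraction of the problem solved as a function of
    total resources invested (arbitrary units), with diminishing returns."""
    return 1.0 - math.exp(-resources / scale)

def marginal_tractability(current_resources, extra=1.0):
    """Extra fraction of the problem solved by adding `extra` resources at the
    current margin -- tractability and neglectedness rolled into one number."""
    return fraction_solved(current_resources + extra) - fraction_solved(current_resources)

def doubling_question(current_resources):
    """The framing quoted above: if we doubled the resources dedicated to this
    problem, what fraction of the problem would we expect to solve?"""
    return fraction_solved(2 * current_resources) - fraction_solved(current_resources)

for r in [1.0, 10.0, 100.0]:
    print(f"resources={r:>6}: marginal tractability={marginal_tractability(r):.4f}, "
          f"doubling answer={doubling_question(r):.4f}")
```

The point of the sketch is just that the marginal quantity depends on how much is already invested, which is why it has to be re-estimated as conditions change rather than treated as a fixed factor.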
I feel like there’s a huge amount of nuance here, and some of the above comments were good critiques…
But for now I’ve got to continue on the research. The investigation is at about 30,000 words; I need to finish it, lightly edit it, and write some shorter explainer versions. Would love to get your feedback when it’s ready!
You forgot to mention that Anthropic’s name literally means “as seen from a narrowly human point of view”, a far cry from moral circle expansion or doing the most good possible.
Thanks Tyler! I think this is spot on. I am nearing the end of writing a very long report on this type of work so I don’t have time at the moment to write a more detailed reply (and what I’m writing is attempting to answer these questions). One thing that really caught my eye was when you mentioned:
Populating and refining a list of answers to this last question has been a lot of the key work of the field over the past few years.
I am deeply interested in this field, but I’m not actually sure what is meant by “the field.” Could you point me to what search terms to use, and perhaps the primary authors or research organizations who have published work on this type of thing?
@William_MacAskill, what are the main characteristics we should aim for “Viatopia” to have?
Will MacAskill stated in a recent 80,000 Hours podcast that he believes marginal work on trajectory change toward a best possible future, rather than a mediocre future, is likely significantly more valuable than marginal work on extinction risk.
Could you explain what the key crucial considerations are for this claim to be true, and give a basic argument for why you think each of these crucial considerations resolves in favor of this claim?
Would also love to hear if others have any other crucial considerations they think weigh in one direction or the other.
Yes… So basically what you’re saying is that this argument goes through if you make the summation over all bubble universes at any individual time step, but longtermist arguments would go through if you take a view from outside the multiverse and make the summation across all points of time in all bubble universes simultaneously?
I guess my main issue is that I’m having trouble philosophically or physically stomaching this. It seems to touch on a very difficult ontological/metaphysical/epistemological question of whether it is coherent to do the summation over all points in space-time across infinite time, as though all of the infinite future already “preexists” in some sense. On the other hand, it could be that taking such an “outside view” of infinite space-time, as though the calculation could be made “all at once,” is not an acceptable operation to perform, since such a calculation could never actually be made by any observer, or at least could not be made at any given time.
I have a very strong intuition that infinity itself is incoherent and unreal, and therefore that something like eternal inflation is not actually likely to be correct, or may not even be physically possible. However, I am certainly not an expert in this, and my feelings about the topic are not necessarily correct; my sense is that these sorts of questions are not fully worked out.
Part of what makes this challenging for me is that the numbers are so ridiculously much bigger than the numbers in longtermist calculations that even a very, very small chance of it being correct would make me think it should get somewhat deeper consideration, or at least have some specialists who work on these kinds of topics weigh in on how likely it seems that something like this could be correct.
Hey again quila, I really appreciate your incredibly detailed response. Unfortunately, I am again neglecting important things and really don’t have any time to write a detailed response; my sincere apologies for this! By the way, I’m really glad you got more clarity from the other post; I also found this very helpful.
Yes, I think there is a constant time factor. It is all one unified, single space-time, as I understand it (although this also isn’t an area of very high expertise for me). I think that what causally separates the universes is simply that space is expanding so fast that the universes are separated by an incredible amount of space, and don’t have any possibility of colliding again until much, much later in the universes’ timelines.
Yes, I believe this is correct. I am pretty uncertain about this.
A reason for believing it might make more sense to say that what matters is the proportion of universes that have greater positive versus negative value is that, intuitively, it feels like you should have to specify some time at which you are measuring the total amount of positive versus negative value in all universes, which is something we actually know how to calculate, in principle, at any given second; and at any given time along the infinite timeline of the multiverse, every younger second always carries 10^10^34 times more weight than older seconds.
Nonetheless, it is totally plausible that you should calculate the total value of all universes that will ever exist as though from an outside observer’s perspective that is able to observe the infinity of universes in their entirety, all at once.
A very, very crucial point is that this argument is only trying to calculate what is best to do in expectation. Even if you have a strong preference for one or the other of these theories, you probably don’t have a preference that is stronger than a few orders of magnitude, so in terms of orders of magnitude it actually doesn’t make much of a difference which you think is correct, as long as there is nonzero credence in the first method.
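To show why the orders-of-magnitude point does almost all the work here, a toy calculation in log space (the payoff exponent is just the 10^10^34 figure from above; the credences and allocations are made up for illustration):

```python
# Toy illustration (numbers made up) of why credence or allocation barely
# matters in order-of-magnitude terms when the payoff is ~10^(10^34).
# Numbers that large overflow floats, so we work with log10(expected value):
# log10(EV) = log10(weight) + log10(payoff).
import math

log10_payoff = 1e34  # i.e. a payoff on the order of 10^(10^34), as discussed above

for weight in [1.0, 0.01, 1e-6]:  # full buy-in, a 1% allocation, a tiny credence
    shift = math.log10(weight)    # orders of magnitude lost relative to full weight
    print(f"weight={weight:>8}: log10(EV) is about 1e34 + ({shift:.1f})")

# Going from 100% to 1% only subtracts 2 from an exponent of ~10^34, which is
# why any nonzero credence keeps the expected value in essentially the same
# order-of-magnitude ballpark.
```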
As a side point, I think that’s actually what is worrying/exciting about this theory as I think about it more: it’s hard to think of anything that could have more orders of magnitude greater possible impact than this does, except of course any theories on which you can either generate or fail to generate infinities of value within our universe. This theory does say that you are creating infinite value, since this value will last infinitely into future universes; but if within this universe you create further infinities, then you have infinities of infinities, which trump singular or even just really big infinities.
Yes! I have been editing the post and added something somewhat similar before reading this comment; there are lots of weird implications related to this. Nonetheless, it always continues to be true that this theory might dominate many of the others in terms of expected value, so I think it could make sense to just add it as 1% of our portfolio of doing good (since 1% versus 100% would be not even a rounding error of a rounding error in terms of orders of magnitude), and hence we don’t have to feel bad about ignoring it forever. I don’t know, maybe that’s silly. Yes, it certainly does seem like a theory which is unusually easy to compromise with!
And that’s a very interesting point about Boltzmann brains; I hadn’t thought of that before. I feel like this theory is so profoundly underdeveloped and uninvestigated that there are probably many, many surprising implications or crucial considerations hiding not too far away.
Sorry again for not replying in full, I really am neglecting important things that are somewhat urgent (no pun intended). If there is anything really important you think I missed feel free to comment again, I do greatly appreciate your comments, though just a heads up I will probably only reply very briefly or possibly not at all for now.
Hi Magnus, thank you for writing out this idea!
I am very encouraged (although perhaps, anthropically, I should be discouraged for not having been the first one to discover it) that I am not the only one who thought of this (also, see here.)
I was thinking about running this idea by some physicists and philosophers to get further feedback on whether it is sound. It does seem like adding at least a small element of this to a moral parliament might not be a bad idea, especially considering that making it only 1% of the moral parliament would capture the vast majority of the value in terms of orders of magnitude. (Indeed, if at any given moment a single person encountering this idea just tried to “live in the moment” or smiled for a second at the moment they thought of it, and then everyone forgot about it forever, we would still capture the vast majority [again, in order-of-magnitude terms] of the value of the idea; and this continues to be true in every succeeding moment.)
Anyways, thanks for posting this, I am hoping to come back to my post sometime soon and add some things to it and correct a few mistakes I think I made. Let me know if you’d like to be involved in any further investigation of this idea! By the way, here’s the version I wrote in case you are interested in checking it out.
Hi Hans, I found your post incredibly helpful and validating, and much clearer than my own in some ways. I especially like the idea of “living in the moment” as a way of thinking about how to maximize value; I actually think this is probably correct, and it makes the idea potentially more palatable and less in conflict with other moral systems than my own framing.
Thank you, I appreciate your comment very much.
I realized upon reading your response that I was relying very heavily on people either watching the video I referenced or already being quite knowledgeable about this aspect of physics.
I apologize for not being able to answer the entire detailed comment, but I’m quite crunched for time, as I nerd-sniped myself into spending a few hours writing this post this morning when I had other important work to do, haha…
Additionally, the response I have is relatively brief; I actually added it to the post itself, toward the beginning:
“Based on a comment below, to be clear, this is different from quantum multiverse splitting, as this splitting happens just prior to the Big Bang itself, causing the Big Bang to occur, essentially causing new, distinct bubble universes to form which are completely physically separate from each other, with it being impossible to causally influence any of the younger universes using any known physics as far as I am aware.”
That said, I think that in reference to the quantum multiverse, what you’re saying is probably true and a good defense against quantum nihilism.
For more detail on the multiple levels of multiverse I have in mind, see Max Tegmark’s “Mathematical Universe,” which is quite popular and, if I remember correctly, includes both of these in his four-level multiverse.
If I am mistaken in some way about this, though, please let me know!
On the meta stuff, however, I think you are probably correct and appreciate the feedback/encouragement.
I think that when I have approached technical subjects that I’m not exceptionally knowledgeable about, I have at least once gotten a lot of pushback and downvotes, even though it soon afterward became clear that I was probably not mistaken and was likely even using the technical language correctly.
It seems this may have also occurred when I was not being appropriately uncertain and hesitant in stylistic aesthetics or epistemic emphasis. Because of this, I have moved along the incentive gradient toward expressing higher uncertainty so as not to be completely ignored, though I may have moved too far in the other direction.
Intuitively though, I do feel this idea is a bit grotesque, and worry that if it became highly popular it might have consequences I actually don’t like.
While existential risks are widely acknowledged as an important cause area, some EAs, like William MacAskill, have argued that “trajectory change” may be highly contingent even if x-risk is solved, and so may be just as important for the long-term future. I would like to see this debated as a cause area.
I have been thinking about this kind of thing quite a lot and have several ideas I have been working on. Just to clarify, is it acceptable to have multiple entries, or is there any limit on this?
Mmm yeah, I really like this compromise; it leaves room for being human. But indeed, I’m thinking more about career currently. Since I’ve struggled to find a career that is impactful and that I am good at, I’m thinking I might actually choose a relatively stable, normal job that I like (like therapist for enlightened people/people who meditate), and then use my free time to work on projects that could be maximally impactful.
Yes! This is helpful. I think one of the main places where I get caught up is taking expected value calculations very seriously even though they are wildly speculative; it seems like there is a very small chance that I might make a huge difference on an issue that ends up being absurdly important, and so it is hard to use my intuition on this kind of thing. My intuitions very clearly help me with things that are close by, where it is easier to see that I am doing some good, but harder to make wild speculations that I might be having a hugely positive impact. So I guess part of the issue is to what degree I depend on these wildly speculative EV calculations; I really want to maximize impact, yet it is always a tenuous balancing act with so much uncertainty.
Great, thank you!
Hi, I hate to bother you again; just wondering where things are at with this contest?
I am wondering if the winners of this contest are going to be publicly announced at some point?