[Squiggle Experimentation Challenge] CEA LEEP Malawi
This is a work-in-progress conversion of LEEP’s cost effectiveness analysis from Causal, created for the Squiggle Experimentation Challenge.
Note that the model is currently unverified, for reasons that I’ll mention below.
From the Challenge announcement:
In your post, if you could include some honest feedback on how you found working with Squiggle, that would be appreciated, but it’s not required.
And so here’s some feedback from my first ~10 hours of learning/using Squiggle (v0.3.0), in no particular order:
For complex calculations (on the order of the linked model), it’s currently very, very slow (with the caveat that I am likely not writing optimized code)
This is the primary reason I wasn’t able to completely verify the Squiggle model against Causal—it currently just takes too long to iterate
One thing that would help: turning “Autorun” off by default (and maybe enabling a keyboard shortcut, e.g. Shift+Enter to manually run). Opening a complex model in the Playground currently locks up the tab while the initial result is calculated
The error messages are funny, but often sparse and unhelpful
There’s not currently a scheme for importing existing code, either in the form of libraries or even static distributions (that wouldn’t need to be recalculated on each Autorun), making Squiggle somewhat unwieldy for large projects
It seems like this is on the roadmap
It’s “missing” basic flow control like `for` and `while`, though with a little prodding you can do most everything with `List.reduce` and `SampleSet.map`
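To illustrate the workaround (a minimal sketch, assuming `List.reduce(list, initialValue, fn)` behaves as in Squiggle v0.3.0; this is not official guidance): a running total that would be a `for` loop elsewhere can be folded instead:

```squiggle
// Hypothetical stand-in for a `for` loop: sum of squares of 1..5.
// Assumes List.reduce(list, initialValue, fn) semantics.
xs = [1, 2, 3, 4, 5]
sumOfSquares = List.reduce(xs, 0, {|acc, x| acc + x * x}) // 55
```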
I also had (and forgot the specifics of) some syntax issues with `if/else if/else`, but ternary operators worked fine
No idea if I’m using `SampleSet` properly, but it seems like it should be the default behavior (as opposed to the way “symbolic” distributions currently work)
Consider this example, where `a` and `b` represent the same underlying estimate of e.g. yearly returns on an investment, and `prod` compounds returns for a given `dist` for `n` years:
A Squiggle newbie (like me!) might reasonably expect `aprod` and `bprod` to be the same, but they are not! (I think this is because `a` is resampled each time it is invoked, but `b` is “static” and the same samples are reused?)
Squiggle does a few other “weird” things under the hood, like interpreting `arr[-0.5]` as `arr[0]` without a warning
Both in the VS Code extension and the Playground, it would be great if settings persisted between sessions (this, combined with default Autorun, was a major pain point)
Related, it seems like you can only have one “active” preview open in VS Code at a time, and switching between previews resets settings
Also in the VS Code extension, the syntax highlighting is occasionally broken in ways that don’t seem entirely fixed by this method (copy/paste my linked model to see some examples)
I found a few other random bugs, but was able to work around them for the most part
Overall, Squiggle is a really promising tool, and it is basically ready to go for small projects
I’m grateful to the QURI team for its development, and look forward to using it again in the future!