I really like the idea of OpenSCAD, or this, or the many alternatives. But when I specify a shape with given dimensions, the next shape should attach to it somewhere. And then I want to say: chamfer all outside edges. But in all these programs, it's me redoing the math in my code, computing where each shape goes. As for chamfers, I just give up ...
FreeCAD can do this. So can all of the proprietary parametric CAD programs I've ever used, some of which (PTC Onshape, Siemens Solid Edge, Autodesk Fusion) have usable free tiers.
If you are a programmer, OpenSCAD is easier to learn. However, you will quickly run into limits. Just a few hours into a FreeCAD tutorial, I was already seeing how I could do things I'd never attempt in OpenSCAD. FreeCAD has a reputation for not being great, but I'm not far enough into it to hit its limits - the things I can't figure out feel like things I could learn. In OpenSCAD, the things I couldn't figure out were that way because they were too complex: I could have done them, but the code wouldn't have been readable, so there was no point (not to mention the math errors).
FreeCAD is designed for the things real designers really do. OpenSCAD is designed for the things mathematicians do.
$vau is similar to $lambda, except that it doesn't implicitly evaluate its operands, and it implicitly receives its caller's dynamic environment as a first-class value, which gets bound to a parameter (conventionally named env).
$lambda is not actually a builtin in Kernel, but wrap is; it constructs an applicative by wrapping an operative. (So $lambda can itself be defined in terms of $vau and wrap.)
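For intuition, here is a toy evaluator in Python (my own illustration, not Kernel syntax) showing the two kinds of combiners: an operative receives its operands unevaluated plus the caller's environment, while wrap produces an applicative that evaluates its operands first.

    def evaluate(expr, env):
        if isinstance(expr, str):                 # symbol: look it up
            return env[expr]
        if not isinstance(expr, list):            # literal: self-evaluating
            return expr
        combiner = evaluate(expr[0], env)
        operands = expr[1:]
        if getattr(combiner, "is_applicative", False):
            operands = [evaluate(o, env) for o in operands]  # implicit evaluation
        return combiner(operands, env)            # env = caller's dynamic environment

    def wrap(operative):
        # Kernel's wrap: build an applicative around an operative.
        def applicative(operands, env):
            return operative(operands, env)
        applicative.is_applicative = True
        return applicative

    # An operative, like ($quote x): sees its operand unevaluated.
    def quote(operands, env):
        return operands[0]

    # An applicative built with wrap: sees already-evaluated arguments.
    add = wrap(lambda operands, env: operands[0] + operands[1])

    env = {"quote": quote, "+": add, "x": 42}
    print(evaluate(["quote", "x"], env))  # -> x   (the symbol itself)
    print(evaluate(["+", "x", 1], env))   # -> 43  (x was evaluated to 42)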
Recursive insight is possible with a model that self-trains, but right now that would result in a detour into unreality. Perhaps it could work with the right system for vetting new data before incorporating it into the retraining set.
Right now they just get stupider if you train them on their own output, which suggests that, as a general rule, the quality of the data in the training set is higher than the quality of the output produced by the model. The fidelity is < 1.0. Apparently it is possible to achieve fidelity > 1 (the growth of human knowledge), but our algorithms are not that good yet, it seems.
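A toy way to state that (my own illustration; f is the per-round fidelity of training on your own output, q_t the corpus quality after t rounds):

    q_{t+1} = f \, q_t \quad\Longrightarrow\quad q_t = f^{t} q_0

So quality decays geometrically when f < 1 and compounds when f > 1, the growth-of-knowledge case.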
Not necessarily. For example, Anthropic's Constitutional AI (CAI) uses the model itself in place of human judgments in RLHF, which amounts to RLAIF. CAI feedback is then used to fine-tune the Claude model.
Broadly speaking, you need supervision at level N+1 when the model is at level N. We can amplify models by giving them more time, letting them self-reflect, demanding step-by-step planning, allowing external tools, tuning them on human preferences, or giving them feedback from executing code or from a robot.
Yeah, it makes some sense that you could use more intense introspection to train weaker models… I wonder what the human analogue of that looks like.
Maybe working up a proof and then quizzing yourself on it?
As long as we get >N supervision and the difference is more than the model's regression, it seems that could work. But there seems to be a definite limit to that: the gap between the level-N+1 supervision and the level-N model will only stay above the improvement delta up to a point.
The model would learn from feedback rather than just regurgitating the training set, as long as it is part of a system that can generate this feedback. AlphaGo Zero had self-play for feedback. Robots can check task-execution success. Even chatting with us generates feedback for the model.
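The generic loop looks something like this (a sketch; model and environment are illustrative stand-ins, not any particular framework's API):

    def training_loop(model, environment, rounds):
        # The training signal comes from the world, not from the model's own text.
        for _ in range(rounds):
            attempt = model.act(environment.state())  # a move, a grasp, a reply
            outcome = environment.evaluate(attempt)   # win/loss, task success, user reaction
            model.update(attempt, outcome)            # learn from the verdict
        return model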
I think a discussion of induction is best done by splitting the resulting models into two kinds: models based on statistics, and models based on (abstract/iconic) simulation. (E.g., "all swans are white" vs. "all swans lay eggs".)
Since our reality is "atoms and void", and since the sun and earth are huge configurations of atoms locked together in a stable pattern, the sun coming up tomorrow has nothing to do with statistics, and Bayesian reasoning plays no role in our predictions or our certainty. At least not directly. It plays a role indirectly, when we ask what perturbation, what intervention, could stop this from happening, and how likely such events are.
What we know by experience, by abstraction, and empirically are three distinct modes of knowing. Experiences are directly known and always true. (Experiences might reference other, potentially false things, and so might be false indirectly.)
That resolves the whole Mary's-room knowledge problem. Books cannot inject that kind of direct knowledge, so the claim "Mary knows everything" is either false or true only of a smaller domain.
One can think of analogies, like tamper-resistant logs, or the unique CPU states that arise while actually running a program as opposed to statically analyzing it.
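The log analogy can be made concrete (a minimal hash-chain sketch, my own illustration): the state after N entries depends on the entire history, so the only way to obtain it is to actually replay the entries.

    import hashlib

    def extend(state: bytes, entry: bytes) -> bytes:
        # Each new state hashes the previous state together with the new entry.
        return hashlib.sha256(state + entry).digest()

    state = b"\x00" * 32
    for entry in [b"saw red", b"saw a rose", b"saw blood"]:
        state = extend(state, entry)
    # 'state' can only be reached by running the sequence itself.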
All in all, the non-physicalist conclusions are wildly overdrawn. Moreover, for what it is worth, Jackson himself no longer thinks this argument is a good one.
It is a virtual time machine. You can run time forwards, explore various possible futures, then pick the future most aligned with your preferences, capabilities, and appetite for risk.
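A minimal sketch of that kind of planning (all names illustrative): roll a simulator forward over candidate plans and keep the future you like best.

    def plan(simulate, candidate_plans, utility, n_rollouts=100):
        # simulate(p) samples one possible future under plan p;
        # utility(future) scores it against preferences, capabilities, and risk appetite.
        def expected_utility(p):
            return sum(utility(simulate(p)) for _ in range(n_rollouts)) / n_rollouts
        return max(candidate_plans, key=expected_utility)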