I really like the idea of OpenSCAD, or this, or the many alternatives. But when I specify a shape with such-and-such dimensions, the next shape should attach to it somewhere. And then I want to say: chamfer all outside edges. In all these programs, though, it's me redoing the math in my code, computing where each shape goes. As for chamfers, I just give up...

> chamfer all outside edges

FreeCAD can do this. So can all of the proprietary parametric CAD programs I've ever used, some of which (PTC OnShape, Siemens Solid Edge, Autodesk Fusion) have usable free tiers available.


If you are a programmer, OpenSCAD is easier to learn. However, you will quickly run into limits. Just a few hours into a FreeCAD tutorial, I was already seeing how I could do things I'd never attempt in OpenSCAD. FreeCAD has a reputation for not being great, but I'm not far enough into it to know its limits: the things I can't figure out feel like things I could learn. In OpenSCAD, the things I couldn't figure out were too complex: I could do them, but the code wouldn't be readable, so there was no point (not to mention the math errors).

FreeCAD is designed for the things real designers really do. OpenSCAD is designed for the things mathematicians do.


BOSL2 is a great library to use with OpenSCAD. It provides a bunch of primitive shape functions with chamfer/rounding parameters.

Math doesn't go away tho


Yup, the attachments with BOSL2 are really great. You can do a ton without much math at all, TBH.

https://github.com/BelfrySCAD/BOSL2/wiki/attachments.scad#se...
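For example, a minimal sketch (dimensions made up) of a chamfered plate with a boss attached to its top face, where attach() does the relative positioning:

    include <BOSL2/std.scad>

    // Plate with all edges chamfered; the cylinder lands on the
    // TOP face without computing any coordinates by hand.
    cuboid([30, 30, 10], chamfer=2)
        attach(TOP) cyl(d=10, h=15, chamfer=1);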


Build123d can do chamfers. You can also do relative positioning by selecting positions from the first shape (there are various ways to do that).
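A minimal sketch of the chamfer part, assuming build123d's builder API:

    from build123d import BuildPart, Box, chamfer

    # Chamfer every edge of a box in one call, rather than
    # computing edge positions by hand.
    with BuildPart() as part:
        Box(30, 30, 10)
        chamfer(part.edges(), length=1)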

Useless unless the logical operators receive their right-hand side unevaluated, and that is generalized as a language feature.


A general language feature would be fexprs, or call-by-name (which can be combined with call-by-value using call-by-push-value).

In Kernel[1], for example, operatives are an improved form of fexpr:

    ($define! $if
        ($vau (condition if-true if-false) env
            ($cond 
                ((eval condition env) (eval if-true env))
                (#t (eval if-false env)))))
$vau is similar to $lambda, except it doesn't implicitly evaluate its operands, and it implicitly receives its caller's dynamic environment as a first-class value, which gets bound to env.

$lambda is not actually a builtin in Kernel, but wrap is, which constructs an applicative by wrapping an operative.

    ($define! $lambda
        ($vau (args . body) env
            (wrap (eval (list* $vau args #ignore body) env))))
All functions have an underlying operative which can be extracted with unwrap.
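This is also what makes user-defined short-circuiting operators possible, as asked for above. A two-argument sketch (Kernel's standard $and? is variadic):

    ($define! $and?
        ($vau (a b) env
            ($if (eval a env) (eval b env) #f)))

The second operand b is only evaluated when a evaluates to true.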

[1]: https://ftp.cs.wpi.edu/pub/techreports/pdf/05-07.pdf


That is why you cannot ask the room for semantic changes, like: “if I call an umbrella a monkey, and it will rain today, what do I need to bring?”

Unless we suppose those books describe how to implement a memory of sorts, and how to reason, etc. But then how sure are we it’s not conscious?


> Unless we suppose those books describe how to implement a memory of sorts, and how to reason, etc

It's implied, since they enable someone who does not know Chinese to respond to questions as well as a native Chinese speaker.


> if I call an umbrella a monkey, and it will rain today, what do I need to bring?

I'm not even sure what you are asking for, tbh, so any answer is fine.


Maybe consciousness is exactly like simulated fire. It does a lot inside the simulation, but is nothing on the outside.


Most of those are abstractions, but not runtime overhead. NonNull even enables an optimization not available in most other languages.

And you can wonder, is this accidental complexity? Or is this necessary complexity?
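For the curious, the NonNull optimization mentioned above, as a minimal Rust sketch:

    use std::ptr::NonNull;

    fn main() {
        // Niche optimization: None reuses the bit pattern NonNull
        // rules out (null), so the Option needs no extra tag byte.
        assert_eq!(
            std::mem::size_of::<Option<NonNull<u8>>>(),
            std::mem::size_of::<*mut u8>(),
        );
    }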


Why present statistics like GDP in absolute numbers?

The only thing that graph shows is that China was dirt poor in 1995, and is now still only at 25-35% of USA levels.


GDP per capita is a measure of a country's standard of living; raw GDP is a measure of a country's power relative to other countries.


Why would AGI be bound to human-made models? Why can it not develop its own?


Recursive insight is possible with a model that self-trains, but right now that would result in a detour into unreality. Perhaps it could work with the right systems for vetting new data before incorporating it into the retraining set.

Right now they just get stupider if you train them on their own output, which suggests that, as a general rule, the quality of the data in the training set is higher than the quality of the output the model produces. The fidelity is < 1.0. Apparently fidelity > 1.0 is achievable (the growth of human knowledge), but our algorithms are not so great at this point, it seems.


Not necessarily. For example, Anthropic's Constitutional AI (CAI) uses the model itself to substitute for human judgments in RLHF, effectively producing RLAIF. The CAI feedback is used to fine-tune the Claude model.

Broadly speaking, you need supervision at level N+1 when you are at level N. We can amplify models by giving them additional time, self-reflection, step-by-step planning, external tools, tuning on human preferences, or feedback from executing code or from a robot.


Yeah, it makes some sense that you could use more intense introspection to train weaker ones… I wonder what the human analogue of that looks like.

Maybe working up a proof and then quizzing yourself on it?

As long as we get supervision above level N and the difference is more than the model's regression, it seems that could work. But there seems to be a definite limit to that: the gap between level N+1 and level N will only stay above the improvement delta up to a point.


The model would learn from feedback, not just regurgitate the training set, as long as the model is part of a system that can generate this feedback. AlphaGo Zero had self play for feedback. Robots can check task execution success. Even chatting with us generates feedback to the model.


I think a discussion on induction is best done by splitting the resulting models in two: models based on statistics, and models based on (abstract/iconic) simulation. (E.g. "all swans are white" vs. "all swans lay eggs.")

Since our reality is "atoms and void", and since the sun and earth are huge configurations of atoms locked together in a stable pattern, the sun coming up tomorrow has nothing to do with statistics. And Bayesian reasoning plays no role in our predictions or certainty, at least not directly. It does indirectly, by asking: what perturbation, what intervention, can stop this from happening? And how likely are such events?


What we know by experience, by abstraction, or empirically are three distinct modes of knowing. Experiences are directly known and always true. (Experiences might reference other, potentially false things, and might be false indirectly.)

That resolves the whole Mary knowledge problem. Books cannot inject that kind of direct knowledge. Thus the claim "Mary knows everything" is either false, or only true for a smaller domain.

One can think of analogies, like tamper resistant logs, or unique CPU states while doing static analysis vs running a program.

All in all, the non-physicalist conclusions are wildly overdrawn. Moreover, for what it is worth, Jackson himself no longer thinks this argument is a good one.


It is a virtual time machine. You can run time forwards, explore various possible futures, then pick the future most aligned with your preferences, capabilities, and appetite for risk.

