Hacker News
Ask HN: Is unconventional computing popular?
14 points by Agent101 on May 10, 2010 | hide | past | favorite | 13 comments
Are people excited about and interested in the possibilities of things like autonomic computing, amorphous computing, and other non-von Neumann-style systems?

There seems to be a lack of coverage in geek news, despite a healthy academic community, journals, etc., and I was wondering why.

I've got my own reasons for not being enthused about the current field, but I am curious what other people think.



I, for one, am ridiculously excited by the idea of an entirely new computing paradigm. I know some at HN abhor anything that isn't practical this very second, but I think they just lack imagination. I'd be interested in any articles submitted in the vein of non-traditional computing.


Plenty of people here are very interested in unconventional computing; in fact, I've learned more about it from the people here than from any other source on the web.

Here are some searches that you could try:

http://www.google.com/search?q=site%3Anews.ycombinator.com+p...

http://www.google.com/search?q=site%3Anews.ycombinator.com+c...

http://www.google.com/search?q=site%3Anews.ycombinator.com+c...

http://www.google.com/search?q=site%3Anews.ycombinator.com+f...


It would be more exciting if there hadn't been so many in the recent past, such as quantum computing, which has been going to have a practical application Real Soon Now for the past decade.


One problem with any new technology is that the 'get rich quick' crowd and their marketeers will jump on it so as not to miss the next path to easy riches. They'll over-hype the product, create unrealistic expectations, and move on to the next hot thing once they've given it a bad name.

Modern-day locusts is what they are.


Microcontrollers (e.g., the Atmel AVR chip line which is the basis of the Arduino open source hardware platform) are often modified Harvard architectures, where the instructions are read from flash and SRAM is used for volatile stack/heap memory.

There is a lot of activity in this area, which has been dubbed "physical computing". See e.g. O'Reilly's Make quarterly and SparkFun, which apparently does > $10 million in sales annually selling electronic components and kits to hobbyists. I'm eagerly awaiting my first Arduino starter kit from them! ;)


Oh, of course I'm excited and interested (although not enough to follow the field closely enough to guess what you're unenthused about), but I think it's still a recondite enough area that most HN readers won't know to upvote it.


Have a browse through the table of contents of the International Journal of Unconventional Computing and you might get the same impression as me.

http://www.oldcitypublishing.com/IJUC/IJUC.html

Basically it is too unconventional (chemical computing and the like), faddish, and not focused on producing something usable by the average geek.

That sort of stuff is still interesting (for computing in odd situations) but is not what I am looking for. I suppose I'm wondering why there isn't a computer equivalent of a space elevator: something most people know about that can't be done with current tech but is physically plausible (though it might still be too hard to do). Something that might spark the equivalent of the Spaceward Foundation, but for computers.

The Fleet architecture represents a different face of unconventional computing, one that geeks can get behind, but it concentrates on speed of processing. Looking at the costs of computing, increasing computational power per watt or per FLOPS is useful but does not address the dominant cost of owning and running a computer. The dominant costs, I think, are the costs of learning the system, administering it, and programming it. Neither of the above threads of research addresses those.

I have my own odd-ball ideas, which I'm excited about. I just wanted to gauge the opinion of HN-type people.


It seems like what you're interested in is more like UI or UX research than hardware innovation? The universality of the machine, strengthened by the ubiquity of compilers and software written in high-level languages, almost totally disconnects the user experience from the computing hardware, except for efficiency differences; instead it's tied to the I/O devices and the user interaction techniques, and increasingly, to the data the user is interacting with.

But I do see a fair bit of discussion of researchy and novel UIs here, don't you? On the front page right now I see Heroku (reducing the cost of administering systems), Hummingbird (real-time web site analytics visualization), Android vs. iPhone (which is largely about ubiquity and UI), Chatroulette, the death of files in the iPhone/iPad UI (which sounds like it goes right to the core of the "dominant costs" you're talking about), Nielsen's report on iPad usability, and UI design in Basecamp. And that's just above the fold!


There are three ways to tackle the human costs of computing.

1) Make the things humans have to do easier: UI/UX.

2) Reduce the number of things humans have to do. While all modern hardware can calculate the same things (it's universal), different systems have different security models, which can affect how much maintenance the user has to do. Take capability-based security, an old idea implemented in hardware in the IBM AS/400. Languages based on it (E, Joe-E) are currently being touted as a way to reduce the risk of malware infection: even if malware does get onto the system, it can't do much, because the language VMs operate under a principle of least privilege.
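To make the principle of least privilege concrete, here is a minimal sketch in Python of the object-capability idea behind languages like E: code only gets the authority explicitly handed to it. All the names (`File`, `ReadOnlyFacet`, `untrusted_plugin`) are hypothetical, and Python's attribute access isn't actually tamper-proof, so this only illustrates the pattern, not a real ocap system.

```python
# Least-privilege sketch: "untrusted" code receives only a read-only facet
# of a resource, so it simply has no way to invoke write().

class File:
    def __init__(self, contents):
        self.contents = contents

    def read(self):
        return self.contents

    def write(self, new_contents):
        self.contents = new_contents

class ReadOnlyFacet:
    """Attenuated capability: exposes read() but carries no write authority."""
    def __init__(self, file):
        self._read = file.read  # capture only the method we delegate

    def read(self):
        return self._read()

secret = File("original data")
facet = ReadOnlyFacet(secret)

def untrusted_plugin(cap):
    # The plugin can use whatever authority it was handed...
    data = cap.read()
    # ...but the facet is all it holds: the capability *is* the permission.
    return hasattr(cap, "write")

can_write = untrusted_plugin(facet)
print(can_write)        # False: the facet has no write() to call
print(secret.read())    # still "original data"
```

The point is that there is no ambient authority to abuse: malware handed only the facet can read but has nothing to call to mutate the file.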

If we are changing the architecture for performance (e.g., Fleet) and can't make use of the performance with standard software, we may want to change it in this way as well, to take advantage of the system.

To give a concrete example of how computer architectures can be changed for the better: if Windows had capability-based security at a low level, the kernel could pass bits of memory to a userland process by sharing a capability that granted write access. The userland process could populate the memory; once it had finished and the kernel wanted to read it, the kernel could revoke the write permission. This would prevent this sort of attack:

http://news.ycombinator.org/item?id=1331025

See this for an intro to the philosophy:

http://www.erights.org/talks/virus-safe/index.html
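The grant-then-revoke step described above can be sketched with the caretaker pattern from the E literature: a writer capability that stops working once revoked. This is a hypothetical Python illustration (the names `make_revocable_writer`, `RevokedError` are mine), not real kernel machinery.

```python
# Revocable write capability: "kernel" shares a writer to a buffer, lets
# "userland" populate it, then revokes write access before reading, closing
# the window in which the checked data could be rewritten.

class RevokedError(Exception):
    pass

def make_revocable_writer(buffer):
    """Return (write_cap, revoke); write_cap stops working after revoke()."""
    state = {"live": True}

    def write(index, value):
        if not state["live"]:
            raise RevokedError("write capability has been revoked")
        buffer[index] = value

    def revoke():
        state["live"] = False

    return write, revoke

buffer = [0] * 4
write_cap, revoke = make_revocable_writer(buffer)

# "Userland" populates the buffer through the capability it was handed.
write_cap(0, 42)
write_cap(1, 7)

# "Kernel" revokes write access before auditing/reading the contents.
revoke()
try:
    write_cap(2, 99)  # any later write attempt now fails
except RevokedError as e:
    print("blocked:", e)

print(buffer)  # [42, 7, 0, 0] -- unchanged after revocation
```

Because the caretaker interposes on every write, revocation is immediate: no write made after `revoke()` can reach the buffer the kernel is about to read.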

3) Make the computer do the work for the human. Yes, this is mainly an AI problem, but it is also an architecture problem. If you want the system to manage things like your graphics card drivers for you, you have to make some decisions about the hardware. Which programs are allowed to try to manage the graphics card drivers? How can the user communicate what she wants, in terms of graphics card drivers, in a way the computer will find unambiguous?

So yep, UI and UX are important, but they're only one possible angle of attack, and not the one I'm interested in, because people are doing fine work there while the others languish a bit.


> it could pass bits of memory to the user land process

> by sharing a capability that gave it write access.

> Then the userland process could populate it,

> once it had finished and the kernel wanted to read it,

> they could revoke the writeable permission.

> This would prevent this sort of attack [apparently,

> confusing auditors with TOCTOU attacks on system call arguments]

Virtual memory mapping hardware is already roughly a capability system. The CPU doesn't maintain a list of ownerships and permissions for every page of physical memory; it puts capabilities to those pages into page tables. That's how KeyKOS was able to run efficiently on stock hardware.

Capability systems are indeed better for security in several ways, but this isn't one of them. The problem here is that the memory page is shareable between different user threads. You can solve this problem in a variety of ways, including the one you suggest. However, unmapping the page that a system-call argument lives in before invoking an auditor does not constitute implementing a capability system.
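One of the "variety of ways" alluded to above is simply copying the argument into private memory before the check, so a racing userland write can't change it between check and use. A hypothetical Python sketch (the names `audit`, `syscall_safe` are illustrative, not from any real kernel):

```python
# TOCTOU mitigation by snapshotting: the "kernel" copies a shared argument
# structure before auditing it, then uses the same bytes it checked.

def audit(args):
    # Toy security check: the path must stay inside a sandbox directory.
    return args["path"].startswith("/sandbox/")

def syscall_vulnerable(shared_args):
    if not audit(shared_args):      # check...
        raise PermissionError("audit failed")
    # ...another thread could rewrite shared_args["path"] right here...
    return shared_args["path"]      # ...use (the TOCTOU window)

def syscall_safe(shared_args):
    private = dict(shared_args)     # snapshot into kernel-private memory
    if not audit(private):
        raise PermissionError("audit failed")
    return private["path"]          # use exactly what was checked

shared = {"path": "/sandbox/ok.txt"}
checked = syscall_safe(shared)
shared["path"] = "/etc/passwd"      # a racing/later write to the shared page...
print(checked)                      # /sandbox/ok.txt -- the snapshot is unaffected
```

Unlike the revocation scheme, this costs a copy per call but needs no support from the memory system, which is roughly why it's the conventional fix.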

To a great extent, it seems like the move toward web apps is exactly a move toward a different security model in order to reduce the maintenance the user has to do, a model in which most apps are fairly limited in their authority. The same-origin policy still falls far short of full POLA, but it's a step. The project in this area I'm most excited about is Caja, which is what MarkM's working on these days.


I thought about mapping. Wouldn't you get into trouble if you unmapped the section of memory while it still had to be readable by the kernel? Or can you change a read-write mapping to a read-only one? I'm just getting into Windows internals.

Heh, I didn't know there were fellow people interested in KeyKOS-type stuff here. I'm fairly new to that, and more interested in the third way to reduce the cost of ownership, having an adaptive-computing background.

If you submit a link to Caja here, let me know and I'll upvote it. The cap-like stuff that the Marks were working on for delegating authority to web apps was also interesting. It does reduce the amount of maintenance the user has to do, but they still have to pay for the web apps, so depending on the user's income and the cost of the service it might not reduce the total cost by much.


How about disappointed? I've seen intriguing non-von Neumann architectures for decades, and they always lose out to Moore's Law and the fact that 1,000X more engineering resources are invested in von Neumann architectures.


Build a widget with it that I can buy, or ship a piece of software written in/with it, and I think there will be much more interest in it.



