
that's an optimistic way to frame the situation; there's heavy opposition from the content industry to limits on geoblocking, and unsurprisingly the industry seems to have support within the Commission (Oettinger at least - perhaps the fact he now left the Digital Economy and Society position helps).

I'm not confident at all they'll be able to completely ban geoblocking in one go. Hopefully they'll at least poke some holes in it this round. Most parts of the single market needed a couple of revisions of a directive, quite a few years apart, before a particular market was fully liberalized.


> Oettinger at least - perhaps the fact he now left the Digital Economy and Society position helps).

It certainly has lowered my blood pressure, even though he's now in a much more important position. But that also means that people will actually stop him if he continues to embarrass himself the way he previously did.

There'll be exceptions for sports broadcasts, and maybe for other content where there's a legitimate reason for a difference in prices. For example: a French 24h news channel may be worth 20 euro in France, but is more of a "yeah, all right, why not" buy for someone in Austria trying to freshen up his language skills.


Funny how "let the market decide the price" is suddenly a much less attractive option when it means making way less money.

Are French newspapers also suddenly a lot cheaper when sold in Austria? Of course not. They're more expensive in fact, because of transport and the cost of stocking a relatively unpopular item.

Now, these two costs don't matter for digital/streaming media. So imagine that they are negligible for French newspapers, too. Would they be sold in Austria for super-cheap, with just a slim margin on the price of paper they're printed on? I kind of expect the price to be roughly the same, actually.

The only reason that digital/streaming media get to pretend to be "different" is because they started out with the technological means to enforce market segmentation before regulation got wind of it. And that ability is incredibly profitable, to the detriment of the consumer. Which is why we regulate it. And now they don't want to give back something they really didn't have any right to in the first place.

Note that I'm not claiming either side of whether these streaming media should be the expensive local price everywhere, or the cheap everywhere-else price. Just that the fact that the market value of something differs extremely between regions, is an argument for regulation of market segmentation, not against.


curious: what kind of text editor(s) would make sense for non-code writing, say a novel, say with markdown or something similar? Especially if the person writing is non-technical, so things like emacs, vim, or atom don't seem like a great fit? I'm thinking something non-obtrusive with limited options, so that the UI doesn't get in the way of writing, but with very little learning curve.

And, given the example of George R. R. Martin, perhaps something WordStar-like would fit the bill? If so, are there good free editors in the style of WordStar?

I have this friend, and I've seen the mess of unintended font changes, sizes, styles, bulleted lists, indentation changes, and all kinds of horrible stuff in his manuscripts when I had to repair some of that damage because it was getting unusable. I think he'd appreciate a more focused alternative.


For serious book writing, with proper chaptering and everything, I would probably use GitBook.

For anything else, even though it is not open source, Typora is to me the best visual markdown editor. It even makes me tolerate it being an Electron app, so it's pretty good.

https://www.gitbook.com/

https://typora.io/


I use Geany and really like it. Customizable. Even easy to e.g. type something in, press tab, and it inserts the output of your shell script or program. I use that to put the day's weather in my journal.


I know it's extreme, but Notepad. Literally and unironically; write in markdown, and it should work.


No. If for absolutely no other reason, through Windows 7 at least, Notepad allows you to undo only the last thing you've done. That's a perfect one-ingredient recipe for inevitably losing work.


Notepad++, PSPad, AkelPad, Notepad2, and SciTE are all free & good replacements for stock Notepad.


sounds like a General Game Playing task. Well, MCTS alone is often used in that domain, and it wouldn't surprise me if Ke Jie were a weaker checkers player than a plain MCTS, never mind any attempts to use neural nets (if that would even make sense for checkers?)
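For illustration, here's what "a plain MCTS" means in practice: a minimal UCT-style Monte Carlo tree search, sketched on a toy game (Nim: take 1-3 stones, whoever takes the last stone wins). The game choice and all parameters are purely illustrative, not anything from the AlphaGo work.

```python
import math
import random

def legal_moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

class Node:
    def __init__(self, stones, parent=None, move=None):
        self.stones = stones        # stones left; this node's player is to move
        self.parent = parent
        self.move = move            # move that led into this node
        self.children = []
        self.untried = legal_moves(stones)
        self.wins = 0.0             # wins for the player who moved INTO this node
        self.visits = 0

    def uct_child(self, c=1.4):
        # UCB1: exploit win rate, explore rarely-visited children
        return max(self.children, key=lambda ch: ch.wins / ch.visits
                   + c * math.sqrt(math.log(self.visits) / ch.visits))

def mcts(root_stones, iterations=3000):
    root = Node(root_stones)
    for _ in range(iterations):
        node = root
        # 1. selection: descend via UCT while fully expanded
        while not node.untried and node.children:
            node = node.uct_child()
        # 2. expansion: add one unexplored child
        if node.untried:
            m = node.untried.pop()
            child = Node(node.stones - m, parent=node, move=m)
            node.children.append(child)
            node = child
        # 3. simulation: random playout; does the player to move at `node` win?
        stones, turn, to_move_wins = node.stones, 0, False
        while stones > 0:
            stones -= random.choice(legal_moves(stones))
            if stones == 0:
                to_move_wins = (turn == 0)
            turn ^= 1
        # 4. backpropagation: flip the result at every level up the tree
        result = not to_move_wins   # a win for the player who moved into `node`
        while node is not None:
            node.visits += 1
            node.wins += result
            result = not result
            node = node.parent
    # play the most-visited move from the root
    return max(root.children, key=lambda ch: ch.visits).move

random.seed(0)
print(mcts(5))   # game-theoretically, the winning move from 5 stones is to take 1
```

Pure MCTS like this already plays simple games competently with no domain knowledge beyond the rules, which is exactly why it's a staple of general game playing.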


> AlphaGo essentially baked good moves into the value and policy networks by playing millions of times.

I don't think that's a very good description of how AlphaGo was trained at all; you're essentially saying it merely overfits the training set, yet it clearly generalizes rather well to unseen board situations and still evaluates them successfully. No machine learning system would be found useful if all it could do was merely memorize the training data.

Re the use of deep reinforcement learning: for one, the role of reinforcement learning in the first version of AlphaGo, the one described in the Nature paper, was rather limited, and a small part of its training; it just turned a ~3d KGS policy network into a ~5d KGS bot, and was used to generate a training sample for the value net. If we had enough recorded human games to train the value net directly, that'd be an unnecessary step anyhow. And you could create such a training set without reinforcement learning, since there are pure Monte Carlo bots stronger than 5d KGS - but that'd be far more computationally expensive.

But it's still not really true that there aren't obvious applications of deep reinforcement learning - indeed robotics is one promising application, and that seems rather relevant. This paper initially demonstrated an impressive improvement in manipulation tasks, and you can probably follow its numerous citations for newer work: http://arxiv.org/abs/1504.00702

I do agree that this exact architecture in AlphaGo probably doesn't have applications beyond teaching us how to play go better; it seems too specialized. I believe they mean it in just the vaguest possible sense: that the kind of deep learning algorithms demonstrating incredible performance in AlphaGo have diverse applications; but this should not come as a surprise to anyone even loosely following what people have done with deep learning in the past couple of years anyhow.


Go works precisely because it is a small closed system. An interesting match (from an AI perspective) would be a pro playing AlphaGo on an unusual board (e.g., one in the shape of a cat). The pro would take everything he knows about the game and apply it to the odd situation. AlphaGo is so specifically tuned that it cannot even handle any case except 19x19 (and maybe 9x9). Another interesting question would be small rule changes like "you may not play on any star points or any point directly touching them until turn 30".

Go has deep strategy, but it is very well defined in terms of what can and cannot be done and those rules are not particularly complex. Power grids in contrast are far more complex. There are thousands of rules, but also many more thousands of unwritten assumptions and case-by-case analysis. A final issue is that there exist unsolved and unrecognized problems.

The last AI winter (deep learning is just the latest rebrand) came from researchers overstating their accomplishments and making promises about general intelligence that could not be kept. Any claim about anything that requires general intelligence in the near future is undoubtedly overpromising.


> Alphago is so specifically tuned that it cannot even handle any case except 19x19 (and maybe 9x9).

Do you have any sources to back this assertion? It sounds unintuitive, as I know object recognition systems are usually trained on small images but generalize well to arbitrary image sizes. What you are describing sounds like overfitting.


The paper itself repeatedly says that all 48 layers of the policy network are 19x19 matrices. To make the point, though: they initially train AlphaGo using actual games. After a hundred thousand or so training games, it's finally ready to start playing and learning. There are fewer than a couple dozen recorded games on larger boards.

If you haven't played go very much, you may think that "it's just a bigger board". 19x19 is commonly used because it has an even balance of edge and center influence (in reality, edge influence seems to be slightly higher). With 13x13, corner plays have overwhelming influence in the center. At 9x9, there is basically no center strategy at all. Normal strategies of starting in the corners and expanding influence toward the center don't work as effectively with larger boards (the larger the board, the more this becomes true).

This is a much different issue than image recognition in that strategy doesn't scale in the same way that images do.
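The fixed-board-size point can be made concrete with a toy numpy sketch (illustrative only, not AlphaGo's actual architecture): a convolutional filter slides over a board of any size, but a flattened fixed-size output layer, like a 361-way policy head, is welded to the 19x19 board it was built for.

```python
import numpy as np

def conv2d_same(board, kernel):
    """Naive single-channel 2D convolution with zero padding ('same' size)."""
    k = kernel.shape[0]
    pad = k // 2
    padded = np.pad(board, pad)
    out = np.zeros(board.shape)
    for i in range(board.shape[0]):
        for j in range(board.shape[1]):
            out[i, j] = np.sum(padded[i:i + k, j:j + k] * kernel)
    return out

kernel = np.ones((3, 3)) / 9.0   # 3x3 averaging filter, just for illustration

# the convolutional part is size-agnostic: one kernel, any board size
for n in (9, 13, 19):
    assert conv2d_same(np.random.rand(n, n), kernel).shape == (n, n)

# but a flattened, fixed-size output layer is tied to one board size
W = np.random.rand(19 * 19, 19 * 19)   # weight matrix sized for 19x19
W @ np.random.rand(19 * 19)            # fine on a 19x19 board
try:
    W @ np.random.rand(13 * 13)        # shape mismatch on a 13x13 board
except ValueError:
    print("dense head rejects a 13x13 board")
```

And even if the whole network were convolutional and could technically accept other sizes, the strategic knowledge wouldn't transfer, which is the point above about strategy not scaling the way images do.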


I'm a huge proponent of AlphaGo and I think it is a revolutionary leap.

The key, I think, is:

> yet it clearly generalizes rather well to unseen board situations and still evaluates them successfully

I'm not sure this has been proven to be meaningful in a general sense, as you seem to also imply. Extrapolation can be a tricky, subtle business. What about unusual board sizes, for which no training data exists? Or if you changed a rule? I'm sure DeepMind would say the adversarial approach would work for these cases, but I'm not sure it would. It would be very interesting to see if humans could 'learn' a new state more quickly than the algorithm.

That might provide a hint that the algorithm is 'just' fitting the data well (with appropriate baked in regularization, of course). Or if it can more generally 'learn' given system rules.


Hm, well, you are no doubt right that it doesn't generalize well to a change of rules. It reminds me of that game DeepZen played. It was trained with a komi of 7.5, and it played too soft and lost when the actual match komi was 6.5 (or maybe it was the other way around?). A human does not have much trouble adapting to such small rules variations, but at least the version of DeepZen that played that match was hard-coded for that exact komi value, because that's what was used in all of its training examples, and it wasn't given as a parameter. It shouldn't be a hard limit of the approach - indeed I think AyaMC was said to have been trained with some flexibility in its komi.

Still, I think AlphaGo does demonstrate amazing positional judgement in unseen board states, and that this is visible in the details of how it plays out particular situations. No two games are exactly alike - the difficulty of go for computers lies precisely in its extreme combinatorial explosion - and in particular tactical situations every detail of the position matters. Yet you can see AlphaGo judging the correct sequences of moves, "knowing" how to make a particular group alive for example, even when some other move seems more natural.

And probably the most amazing thing about how it plays is how early it becomes completely sure that it's got an advantage on the board, and how precisely it judges how much it needs to keep that advantage to the end. Every detail of the board is again relevant here, and basically no human would be so confident so soon.

A go bot that couldn't adapt its tactics to unseen situations would be easy to beat: just ensnare it in a large complicated fight, and you're going to kill a big group and guarantee a win. Of course people tried this in some of the MasterP games, and it turns out AlphaGo is tactically just as strong.

So it's basically like other generalizations you get from machine learning: a net trained on, say, ImageNet will generalize to different poses, occlusions, contexts and variations of objects similar to what it was exposed to in training, and still do a superhuman job of classifying such pictures, but will naturally be quite hopeless with completely unseen items. So too AlphaGo seems to know the game of go, generalizing from seen examples to correct judgements in other states, but would be quite hopeless if tested on even a slight variation of the game rules.


but humans obtain their watts very, very inefficiently, so there's probably at least an order of magnitude to gain for the same kind of system-level efficiency. Consider a field to feed a human vs a PV installation of the same physical size. And of course there are all the other ways to obtain electricity...
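A back-of-envelope version of the field-vs-PV comparison, where every number is a rough assumption picked for illustration (insolation, land per person and efficiencies all vary a lot):

```python
# rough assumptions, not measurements
insolation = 200.0           # W/m^2, time-averaged solar flux at the surface
land_per_person = 2000.0     # m^2 of cropland to feed one person, ballpark
human_power = 100.0          # W, ~2000 kcal/day metabolic rate
pv_efficiency = 0.20         # typical commercial panel

pv_output = insolation * land_per_person * pv_efficiency  # watts of electricity

print(f"PV on the same land: {pv_output / 1000:.0f} kW")
print(f"human metabolic rate: {human_power:.0f} W")
print(f"ratio: ~{pv_output / human_power:.0f}x")  # hundreds of times the human
```

Even if these numbers are each off by a factor of a few, the gap comfortably exceeds an order of magnitude, which is the point above.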


The surprisingly efficient thing about biology is that it will take care of itself after released in an energy-dense environment...


Yes, whelks for example are known for their ability to flourish in supernovas.


That's ridiculous. Only tardigrades can survive supernovas.


No no, you missed the REAL computer supremacy event then; it was the 50ish (!!!) games the MasterP bot played in January against a field of top go professionals on some Asian go servers. The bot went 50-0, crushing all opponents, often in interesting ways.

FineArt is among the bots that have a positive score against top professionals, yes. But it can also lose to them. MasterP showed that a computer can completely outclass humans!

After the series of games, it was revealed that MasterP was in fact AlphaGo. As far as we can tell from that series, AlphaGo is a serious number of Elo points above the other strong bots. So now the question remains: is it that dominant at longer time controls too, as those games were all quick? Hence this match.
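A quick sketch with the standard Elo expected-score formula shows why an unbeaten streak that long implies a large rating gap. The gap values below are purely illustrative, and this treats the games as independent against a single opponent, which the mixed field of pros of course wasn't:

```python
def expected_score(gap):
    """Standard Elo expected score for the stronger player at a given rating gap."""
    return 1.0 / (1.0 + 10.0 ** (-gap / 400.0))

def sweep_probability(gap, n_games):
    """Chance of winning n games straight, treating games as independent."""
    return expected_score(gap) ** n_games

# even at a 400-point gap, a 50-game sweep is under a 1% event;
# a sweep only becomes likely at gaps of several hundred points
for gap in (100, 400, 700):
    print(f"gap {gap:>3}: P(50-0 sweep) = {sweep_probability(gap, 50):.4f}")
```

So a 50-0 result doesn't pin down the gap exactly, but it does suggest AlphaGo sat hundreds of Elo above that field.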


Yes, I didn't know that, wow, 50-0! I mean, is there any doubt at this point that it will be dominant at longer times too? And don't the bots play each other?


It absolutely should be dominant in a long game too. Even if it loses some of its strength at such time settings, it shouldn't lose THAT much; it was just too superhuman. The play should be interesting though; Ke Jie both had access to other strong bots in China for a long time, and could study the records of the MasterP games; maybe he'll try something interesting and get interesting responses, so we all learn a bit about the nature of go (haven't watched the recording of this game yet, just woke up).

There was a computer bot championship recently (the UEC cup), but AlphaGo declined to participate. FineArt won, DeepZen was second. I think there are a few other Chinese bots that could be stronger than Zen but didn't participate. So the real competition didn't bother to show up, really.


Fascinating, thanks for the answers. The bot improvements in the last few months have been so radical I can't begin to imagine how much it must be disrupting the strategic landscape and player status.


there were actually 60 games and the final score was 60-0


yup indeed; and also it started already in December, not just January as stated above. Should've double-checked my memory before writing stuff...


> There are quite a small number of old martial art that's pretty decent still.

could you mention a few more? And distinguish, if it's possible, which of those were generally practiced as highly competitive full contact sports (within their rulesets of forbidden strikes etc), and which mostly as a martial art?


yeah, but apparently it barely declines until, say, 60+. Speed suffers, however.

"The adult years were remarkable in that complexity remained at a high level for a protracted period, in spite of a slow decrease of speed during the same period. This suggest that during the adult period, people tend to invest more and more computational time to achieve a stable level of output complexity. Later in life (>70), however, speed stabilizes, while complexity drops in a dramatic way."

and

"These speed-accuracy trade-offs were evident in the adult years, including the turn toward old age. During childhood, however, no similar pattern is discernible. This suggests that aging cannot simply be considered a “regression”, and that CT (completion time) and complexity provide different complementary information. This is again supported by the fact that in the 25–60 year range, where the effect of age is reduced, CT and complexity are uncorrelated (r = −.012, p = .53). These findings add to a rapidly growing literature that views RIG tasks as good measures of complex cognitive abilities [21, for a review]."


It almost sounds like people try harder and harder to keep up with societal expectations of continued "sharpness"... until they hit some age where cultural norms say they aren't expected to be sharp any more. And then they stop trying, which causes an outsized decline as their lowered expectations of their own capability, feeds into lowered capability, which feeds back into further-lowered expectations.


could be, I really don't know. I can imagine perfectly benign scenarios too; if our long-term memory grows with time, maybe there are just more possible connections/associations to filter out over time too, so that decisions just become harder, with only deeper old age being simple tissue decline, the exhaustion of cognitive reserves, etc.

Somewhat relatedly, I think I heard on some old Skeptics' Guide to the Universe episode about an experiment with some nootropic (maybe a racetam, though I think it was modafinil), and this kind of reaction-time-to-accuracy tradeoff was observed in young healthy adults. Those with fewer correct answers slowed down, presumably concentrated better, and gave better answers, but those already good at the task simply got slower and yet no better.


Is it possible to get a link to that episode?


ah, I'm sorry, but I really don't remember. This was some years back.



hey, that's not bad; I didn't even expect SGU to have such detailed transcripts.

But actually, looking for it now, the particular study I had in mind seems to be from a Science or Fiction segment of this episode:

http://media.libsyn.com/media/skepticsguide/skepticast2014-1...

Interesting to see how Steve's assessment of modafinil evolved between the two episodes.


Thank you so much for your time. I was really curious.


>And then they stop trying, which causes an outsized decline as their lowered expectations of their own capability

As if there are no physiological factors at play?

I mean, tons of studies have shown declining mental capacity, degenerative diseases increase with age, etc. Even in animals that have no, or much less, "societal expectations for sharpness".


well, if only categorisation of extremism online were better. Recently I was searching YouTube for the word "Lokiarchaeota", an exciting very recent find in the origin of the eukaryotes, hoping for a scientific lecture, and was mostly getting creationist preachers as results.

Who the hell searches for priests by naming obscure microbes?? And sadly this is hardly unique; I've been bombarded by UFOs, reptilian aliens and similar outlandish nonsense in searches for factual science regularly. And it's clearly not doing a reasonable job of predicting my interests at all, for those are not items I'd view.

Why Google's imbecilic algorithms promote and push such extremist trash on the general populace is beyond me, degrading many neutral search results pages to worse than the yellow press, but this does sadly seem to be what's currently happening, and it would reasonably lead many away from the wonders the Internet could offer instead.


This is a perfect example of how centralization is harmful.

YouTube has too much content without enough effort to categorize that data. YouTube's content skews toward whatever is most appealing to the widest audience. Naturally the extreme, and even the absurd, are most at home there.


as far as I understood the Starshot discussions, they weren't optimistic about managing to pack any spectrometry equipment into such a small package, so there'd be little hope of proper chemical analysis... There was some discussion of what could be figured out from color alone, without spectra - which sounds rather desperate.

A big telescope on Earth or in orbit could probably do more actual work figuring out things about a nearby star than such seconds-long flybys of such wimpy payloads, which is all you get even after you manage the non-trivial challenges of building the phased array, surviving the ISM, establishing communications somehow, etc.

