Hacker News | TheCartographer's comments

Professional GIS user here. Most of my high resolution terrain models are well within the capacity of even a modest GPU to load into WebGL and run on a browser without breaking a sweat. Terrain models are static by nature and don't require a lot of horsepower once they are loaded.

The struggle comes in reading the data from storage, where several minutes can be spent loading in a single high resolution raster for analysis/display. When I built my own PC this year, I splurged on an M.2 SSD for the OS and my main data store. Best decision I ever made for my workflow - huge 3D scenes that formerly took minutes to load on a spinning platter now pop up in seconds.
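For a sense of the bottleneck, this is roughly how I time a raw read (just a sketch using rasterio; "dem.tif" is a stand-in for whatever terrain raster you care about):

    import time
    import rasterio  # GDAL-backed raster I/O

    path = "dem.tif"  # hypothetical high-resolution terrain raster
    start = time.perf_counter()
    with rasterio.open(path) as src:
        elevation = src.read(1)  # pull the whole first band into memory
    elapsed = time.perf_counter() - start
    print(f"Read a {elevation.shape} raster in {elapsed:.1f}s")

On a spinning platter that read dominates everything else; on the M.2 drive it's basically gone.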

This thing would probably be the bee's knees for what I do. Shame it starts at $10k (and that it's AMD, so no CUDA, so no way to justify it at work for "data science" :-/).


Any idea why Nvidia dominates in "serious" GPGPU applications? I remember people mocking them for refusing to adopt OpenCL, and when they finally caved, their implementation performed far worse than AMD's. How did they win people over? Did they give out a bunch of free GPUs to universities or something?


CUDA mainly. It's fast (faster than OpenCL) and NVIDIA is really good with their software. CuDNN for deep neural networks is almost an industry standard. Nvidia understands software and markets better, while AMD sits on their butts for too long. Granted, AMD always comes out with a good open source solution that is always just a bit worse and very late. NVIDIA tries to create markets while AMD messes up and ends up becoming a follower. Shame really.

Edit: this is a good step though. AMD should be pushing the envelope and hopefully with Zen, they can actually realize some of the gains of HSA (which they tried to pioneer but it wasn't so useful since Bulldozer isn't that good)


Shame, because AMD seems to pop out cards with higher max TFLOPS but fails hard in the software department


CUDA is not [unqualified] "faster than OpenCL". NVIDIA does design software better than AMD, without a doubt. I think it's likely that NVIDIA decided to push CUDA and lag OpenCL support, betting that their customers would balk at having to port between the two. It's not that hard to port IMO, they're extremely similar.
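For what it's worth, here's a toy vector-add of my own (not from any real codebase) showing how close the device-side dialects are; the kernels are written as the source strings you'd hand to the respective runtimes:

    # Toy example only: the same vector-add kernel in both dialects.
    cuda_kernel = r"""
    __global__ void add(const float *a, const float *b, float *out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) out[i] = a[i] + b[i];
    }
    """

    opencl_kernel = r"""
    __kernel void add(__global const float *a, __global const float *b,
                      __global float *out, const int n) {
        int i = get_global_id(0);
        if (i < n) out[i] = a[i] + b[i];
    }
    """

The real porting effort is mostly host-side boilerplate, not the kernels themselves.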


OpenCL IMO is an ugly API born of the equivalently ugly CUDA driver API because Steve Jobs got butthurt at Jensen Huang for announcing a deal with Apple prematurely. Downvote all you like, but as John Oliver would say "That's just a fact." I witnessed it secondhand from within NVIDIA.

In contrast, OpenCL could have been a wonderful vendor-independent solution for mobile, but both Apple and Google conspired independently to make that impossible (ironic in Apple's case because of OpenCL's origin story and idiotic in the case of Google and its dreadful Renderscript, a glorified reinvention of Ian Buck's Ph.D. thesis work, Brook).

Fortunately, AMD appears to have figured out OpenCL has no desktop traction and they have embarked on building a CUDA compiler for AMD GPUs called R.O.C (Radeon Open Compute). They have also shown dramatically improved performance at targeted deep learning benchmarks. It's early, but so is the deep learning boom.

The wildcard for me is what Intel will decide to do next.

The big win IMO is vendor-unlocking all the OSS CUDA code out there.

https://github.com/RadeonOpenCompute https://techaltar.com/amd-rx-480-gpu-review/2/


Jobs might've been butthurt and it might've made for a better incentive, but nobody likes to sole-source critical technology elements.


It would have been fantastic if Intel had stopped beating the LINPACK horse a lot sooner and built a viable competitor to GPUs by now. Not in this timeline though, alas... Maybe 2020?


They did give out a bunch of free GPUs to universities, but more than that, they have invested heavily and deliberately in HPC: community engagement, better SDKs (it's been almost a decade since I was first able to build a Linux executable against CUDA, and that code should still work), server SKUs sold in the server channel (AMD didn't bother to design SKUs for servers until it was too late). Other things that solidified NVIDIA's lead were the AWS win (in 2010, they managed to partner with AWS to bring GPU instances to market - and those are thriving) and the 2011 ORNL Titan win (important for the HPC community mindshare).


They also partner with universities to design courses around Nvidia products, so they're better at funneling talent. Or at least it seems so. I don't know about AMD programs.


Better tooling.

Also, they accepted that the world had moved on: CUDA has supported C, C++ and Fortran since day 1.

Shortly thereafter they made the PTX Assembly format available and any language could easily target it for GPU execution.

OpenCL got stuck in a world of C with the same runtime compilation model as GLSL.
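Concretely, the kernel is shipped as a C string that the driver compiles at runtime, just like GLSL. A minimal sketch via pyopencl (the toy kernel and sizes are my own, purely for illustration):

    import numpy as np
    import pyopencl as cl

    ctx = cl.create_some_context()
    queue = cl.CommandQueue(ctx)

    # Kernel source is just a C string, handed to the driver to compile at runtime.
    src = """
    __kernel void scale(__global float *data, const float factor) {
        int i = get_global_id(0);
        data[i] *= factor;
    }
    """
    prg = cl.Program(ctx, src).build()

    host = np.arange(1024, dtype=np.float32)
    mf = cl.mem_flags
    buf = cl.Buffer(ctx, mf.READ_WRITE | mf.COPY_HOST_PTR, hostbuf=host)
    prg.scale(queue, host.shape, None, buf, np.float32(2.0))
    cl.enqueue_copy(queue, host, buf)  # read the scaled data back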

Only after losing to CUDA did they cave in and create SPIR and the SYCL C++ implementation.

And they have done it again with Vulkan.

While Metal and DX are object based, and even their shader languages are C++-like, Vulkan is all about C.

Recently they decided to adopt NVidia's C++ wrapper.


I don't know anyone who thinks Metal is better than Vulkan. In fact I've heard only the opposite.

And the reason why it's C is so that you can bind from multiple languages. DirectX doesn't have this problem because it's not plain C++, it's COM--which is designed to support bindings from multiple languages. But COM is effectively Windows only, and Vulkan needs to be platform independent. So Khronos made the correct choice here.

As you acknowledged, they also created an idiomatic C++ wrapper, so I don't see what your complaint is at all. Khronos did the correct thing every step of the way.


With the rise of middleware engines in the industry, the actual APIs are even less relevant nowadays than a few years ago.

Besides, the big studios already abstract the graphics APIs in their in-house engines anyway, or they outsource to porting studios.

Usually the HN community, which is more focused on web development and FOSS, seems to miss the point that the culture in the game industry is more focused on proprietary tooling and how to take advantage of their IP.

The whole discussion around which API to use isn't that relevant when discussing game development proposals.

It is more akin to the demoscene culture, where cool programming tricks were shown without sharing how it was done, than the sharing culture of FOSS.

If NVidia hadn't made their C++ wrapper available, I very much doubt Khronos would have bothered to create one of their own.


> With the rise of middleware engines in the industry, the actual APIs are even less relevant nowadays than a few years ago.

> The whole discussion around which API to use isn't that relevant when discussing game development proposals.

OK, so if the graphics API doesn't matter, why did your parent comment participate in the graphics API war?

(BTW, I agree with you that the graphics API doesn't matter too much anymore. But I think if you're going to attack Vulkan, you should do so based on specific technical reasons.)

> If NVidia hadn't made their C++ wrapper available, I very much doubt Khronos would have bothered to create one of their own.

Khronos isn't a company—it's a standards body. As NVIDIA is a member of Khronos, "Khronos" did bother to create a C++ API.


Well, for me, being based on pure C instead of the OO interfaces of other APIs isn't something that makes me want to use it.

I'd rather use APIs that embrace OO and offer math, font handling, texture and mesh APIs as part of their SDKs, instead of forcing developers to play Lego with libraries offered in the wild.

> Khronos isn't a company—it's a standards body. As NVIDIA is a member of Khronos, "Khronos" did bother to create a C++ API.

I guess that is one way of selling the story.


> Well, for me, being based on pure C instead of the OO interfaces of other APIs isn't something that makes me want to use it.

But it's not "pure C". C is just the glue. You can use the C++ API if that's what you want, and you have a completely object-oriented API.

Can Linux never have an "OO interface" because all syscalls trap into a kernel written purely in C? Of course not, that would be silly. The same is true here. If you program against an object-oriented C++ interface, then you have a fully object-oriented API.


Anything that increases our dependency on C is bad.

I find it interesting that a Rust designer thinks otherwise.


Anything that increases our dependency on C++ is much, much worse.

You can easily call into something with a C ABI (er, OS ABI designed around C, or whatever is the correct technical name) from any language. Try that with C++ :D
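For instance, here's a rough sketch of what that looks like from Python via ctypes - assuming a Unix-ish system where find_library can locate libm; any C-linkage symbol works this way, while a mangled C++ symbol would not:

    import ctypes, ctypes.util

    # find_library("m") resolves to e.g. libm.so.6 on Linux. Any function
    # exported with C linkage can be called like this.
    libm = ctypes.CDLL(ctypes.util.find_library("m"))
    libm.cos.restype = ctypes.c_double
    libm.cos.argtypes = [ctypes.c_double]
    print(libm.cos(0.0))  # 1.0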


No problem on Windows thanks to .NET and COM.

C++ provides better tools to write safer code than C ever will.

I would rather be using Ada, Rust, System C#, D, <whatever safe systems programming language>, but until OS vendors start providing something else, C++17 will have to do.

Since 1994 I only write C code when forced to do so.


Any opinions you'd like to share about using Metal? I can't seem to find much online about developer's experience with Metal, which seems bad...


I think it was a combination of marketing and great developer tools. I'm not in that business so I don't know first-hand, but former colleagues have said that Nvidia provided tons of tools, examples, and resources, while AMD basically completely neglected developers. This is changing now, but at this point it's too little, too late.


As someone that recently switched from nVidia to AMD, I can also say that nVidia products are just plain better.

AMD is trying to fix it, but it's also now "too late": their drivers, frankly, are crap on all platforms. AMD drivers are very, VERY buggy.

Also, AMD GPUs need excessive amounts of power. At first I didn't consider this a problem, until I bought my AMD GPU and noticed it constantly throttling and causing stutter even in professional software, due to power limits. It also bit AMD in the ass during the RX 480 launch (where the excessive power usage went beyond the motherboard limits, and their "fix" was to make the driver request power beyond the specification of the PSU cables instead, or to allow users on Windows to enable a harder power limit, making it throttle even more).

I had hoped AMD would "scare" nVidia into improving, into stopping their shady business practices and just running a better business. But after actually buying an AMD product, and interacting with their crappy support, crappy community, and crappy distribution network (it was very hard to get the card!), I concluded that AMD has a loooong way to go before they make nVidia react. AMD is too far behind in all aspects, and the only reason they are still competitive is that they sell very power-hungry, beefy GPUs at cheap prices, achieving reasonable performance per dollar; if you compare their products ignoring that, they are just junk (in both the hardware and software sense).


You sound incredibly biased, even if unconsciously. I've not had even close to the same experience you have. You even feed into the power draw crap.


NVIDIA supports developers better than AMD does (on the whole). So they got CUDA working first, and helped developers get their computational frameworks running on CUDA easier and faster. That gave them several hardware generations of head start. If OpenCL 2.x becomes easier to support and more performant as Khronos claims then maybe there'll be a shift.


ATi has always been bad at OpenGL performance in 3D content creation apps. That's the reason. Back in 2001 Nvidia reigned supreme and ATi was buggy. Seems these days both can be buggy, but Nvidia is still #1. The 6800 was an industry changer.


Well, it will be an almost dead-set certainty to run OpenCL; hopefully it will have bindings / extensions to handle the SSD directly from your OpenCL code. Then all you need to do is port the code.


The dev kit is $10k. I imagine the final product will be cheaper.


They will inevitably plummet in price over the next few time units. Look forward to fun!


Write it in R?


Yes, mixing languages is a pro's version of this. Earn 10 points for every language and 20 points extra for every interface between them. Macro languages like XML do count.

Additionally do some build system breakage with automatically partially generated code for extra credit.

Of course if you create some DSL or XML, never include schema or any kind of documentation that is not the parser.

And the absolute worst: include optimized binary-only code - no documentation and no sources. Reverse engineering is fun!


nah. R is actually not bad. i would go for nodejs. or any sort of dependency injection, so no one would ever know what method behind an interface is being called. not to mention kdb+, which is write-only by design.


"Baghdad Bob is not a good behavorial model for a CEO to follow."


Simply owning a fake badge doesn't even prove intent, let alone being actual evidence that it happened

Well, not in the US. Many other countries take a dim view towards citizens owning anything that even remotely resembles the accoutrements of official power and authority.


He may have broken the law by possessing it, but that's not the same as "playing cop". What he actually did doesn't change according to the jurisdiction, only its legal status.


Perhaps that is the fault of a deficient classical education, rather than the merit (or lack thereof) of Bergson's ideas? To extrapolate from your example argument, there are thousands of historical figures you have never heard of. And yet, their decisions and actions have had a profound effect upon the trajectory of history, and the way the world appears to you today. Are they unimportant just because you have yet to personally learn about them?

Or perhaps it is a lack of imagination: Bergson was certainly a large part of the intellectual milieu in which Einstein was working. His thoughts pushed Einstein in certain directions Einstein might not have gone otherwise. Even if Bergson is unremarkable for his own work, surely he is important for no other reason than his impact on the thought of the remarkable and memorable theories of Einstein?

Taking 'I haven't heard of them' as your starting point of historical importance seems like an intellectually lazy argument to me. Each to their own, however, and you are certainly entitled to your opinion.


> Perhaps that is the fault of a deficient classical education, rather than the merit (or lack thereof) of Bergson's ideas?

My point was not about merit, it was about cultural impact.

But having had some time to think about it, I'm probably wrong. Kuhn's concept of the "paradigm shift" is more recent than Bergson. While not directly challenging a specific theory the way Bergson did, it has certainly captured the imagination of the public and of scientists.

That said, it seems that Bergson's star has faded over time. I looked up a few lists of the most influential modern philosophers and he is not usually highly ranked.


Mmmm, sort of. It's probably easier to understand the why of Bergson's theories in their historical context. In the early 20th century, General Relativity and Special Relativity had space and time 'figured out' and quantum mechanics wasn't really a thing yet.

The common train of thought post-Einstein but pre-quantum mechanics was that physics was close to a theory of everything: that the universe could be described with a set of deterministic equations and everything, including human behavior, could be successfully predicted from the beginning of time to the end of time.

Bergson's objections to Einstein are rooted in the concept of free will. They centered on Einstein's handling of time as another spatial concept. Physics would never be able to quantify human behavior, according to Bergson, because Einstein used the wrong model of time. Time (again, according to Bergson) isn't a countable and finite dimension like space is - and thus Einstein was wrong.

Bergson also had no small amount of mathematical understanding, although he certainly wasn't at Einstein's level. Prior to this debate, he wrote an entire book about Einstein's Twins Paradox, and why the premise it started from - that of a countable, space-like time - was wrong.

One reason that the Bergson-Einstein debate impacted the Nobel committee to such a degree was academic politics. At the time, many thought that physics had everything figured out and it wasn't long until everything, including human behavior, could be predicted using the scientific methods of physics and relativity.

Unsurprisingly, a LOT of non-physicists had a problem with this idea.

Now off to read the article, so I can see what was actually discussed there....


So, I worked with Bergson's texts quite a bit in grad school, as he is heavily in vogue at the moment in certain disciplines. To break down his argument to its essentials, the whole concept he is railing against is the spatialization of time. That is, for Bergson, time cannot be subdivided into the "mechanistic time" of the ticking clock, and the idea of a timeline is an abomination.

Hence Bergson's framing of time as duration: for Bergson the essence of experiential time is that our consciousness is always experiencing the latest moment sliding smoothly into the next. Time, he says, cannot be spatialized and counted as space can. Bergson railed against the idea of time being extrapolated to just another metric dimension like the 3 dimensions of space. The spatial dimensions, to him, were static, fixed, dead. It is only duration that gives us the existential, lived experience that we know. Spatialization, to Bergson, was a dirty word; it was the spatialization of our lived experiences that rendered industrial life dead, static, mechanistic and uninteresting. Bergson was railing against the idea of a physics that could predict everything, a popular thought in the early 20th century.

After WWII, Bergson was largely forgotten until Deleuze & Guattari resurrected him. Deleuze in particular was an enormous fan of Bergson and promoted his ideas heavily.

But what was revolutionary about Deleuze's handling of Bergson was his incorporation of post-war complexity/chaos theory and quantum mechanics to recover space as dynamic and mutable. Influenced by such concepts as Riemannian manifolds and fractal theory, Deleuze recognized that space wasn't a static and mechanistic concept at all, but instead, like Bergson's duration, can give rise to all types of unpredictable behaviors, experiences, and mathematics.

Rather than focus on one concept of "space" - the abstracted Euclidean grid - they classified space in two broad classes, the smooth and striated. Smooth spaces are spaces that are analogous to Bergson's duration: the experienced space of the journey, nomadic spaces, spaces that unfold rather than increment, that are uncountable and unexpected. Striated spaces are the class of spaces Bergson focused on exclusively: coordinate spaces, the countable spaces of the Euclidean grid and the map, or that of the timeline.

Essentially, D&G 'recovered' mathematical space as an exciting and unpredictable philosophical concept. All spaces arise from the continuous recapitulation of smoothing and striation, and counting spaces always give rise to the uncountable and to emergent behavior.

A good example is Conway's Game of Life: a simple set of rules played out on a metric space in countable time (striation) gives rise to emergent organizational patterns and a higher level of emergent behavior that simply cannot be predicted or quantified using the original simple set of rules alone (smoothing). Or, to take another case, the Mandelbrot set: a simple pattern gives rise to a recursive, self-similar-yet-never-identical structure that persists to infinity. For D&G, the uncountable always arises out of the act of counting.
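(If anyone wants to see just how small the 'striated' rule set is, here is a quick NumPy sketch of my own - the grid size and glider seed are arbitrary - in which the gliders that emerge appear nowhere in the rules themselves:)

    import numpy as np

    def life_step(grid):
        """One Game of Life step on a toroidal (wrap-around) grid of 0s and 1s."""
        # Count the 8 neighbours of every cell by summing shifted copies of the grid.
        neighbours = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
                         for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                         if (dy, dx) != (0, 0))
        # Birth on exactly 3 neighbours; survival on 2 or 3.
        return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

    # A glider: a 5-cell pattern that translates itself across the grid.
    grid = np.zeros((20, 20), dtype=int)
    grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1
    for _ in range(4):
        grid = life_step(grid)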

This comment is somewhat outside the normal domain of HN, I know, so I hope you will excuse it. I rarely get to show off the hundreds of hours I dumped into D&G and Bergson in grad school in my day job. :-D


Interesting. The vocabulary disconnect with General Relativity (which is the more relevant theory of relativity here, I think) is pretty frustrating, although one thing that struck me is that at the time Bergson was making these arguments, there was a lot of GR jargon yet to be invented. Also, crucially, a formal process for foliating a "block universe" spacetime was decades off (the 3+1 Arnowitt-Deser-Misner formalism arose in the late 1950s), so a late 1910s criticism of GR as treating the timelike axis as "dead" like the spacelike ones was almost reasonable.

Other important and relevant tools were either extremely fresh (e.g. Noether's first theorem) or had yet to be formalized (e.g. gauge theory), and these put practical limits on conceptual attacks on dynamical spacetimes (that's one reason why externally static vacuum metrics, like Schwarzschild's, were popular at the time). Numerical relativity wasn't even a dream in the 1920s.

However, in spite of not-yet-existing tools, it was pretty clear that General Relativity's coordinate freedom combined with diffeomorphism-invariant models of matter would accommodate standard approaches to time-series evolutions of field content (e.g., initial value surfaces and physical laws). Additionally, "ticking clocks" that appeared in Einstein's and others' GR papers were meant as shorthand for much more general objects -- basically anything that has some state that isn't time-translation-invariant. Ideal gases and other thermodynamic composite "objects" count, as do fundamental particles, as does an entire expanding or contracting universe. "Ticking" is simply the application of some arbitrary coordinates (not necessarily linear or even uniform ones; in GR they only have to admit a diffeomorphism) on those "clocks".

One of the interesting things that was pretty fresh prior to Einstein's Nobel was the resolution of the hole argument, which essentially abandoned manifold substantialism. Spacetime without a clock is simply an irrelevance; it's only the presence of at least one (or more) "ticking clocks" that gives meaning to any system of coordinates one puts down on the manifold -- and in particular it's the "ticking clock" or clocks that generate the metric; it is not something that is a property of wholly empty space, and that in turn led to a deeper understanding of the G_{\mu\nu} + \Lambda g_{\mu\nu} side of the Einstein Field Equations (i.e. the curvature of spacetime determined by the metric).

There was undoubtedly some "philosophy" going on in the early days of General Relativity, but frankly most of the work was on modelling gravitational collapse in general, which was both fairly difficult technically and also a deep well of unexpected consequences that were even more strikingly different from Newtonian gravitation than the Kepler problem in GR.

I'm fairly confident that the ideas raised related to this Bergson-Einstein debate were uninteresting (and possibly even mostly unknown) to most of the scientists exploring the golden age of General Relativity (1960s & 1970s mainly). GR, especially post-Einstein, racked up some extremely precise quantitative predictions of the behaviour of large bodies (and small things near large bodies) that matched later observations with high precision.

By the 1980s, the space for thinking about the philosophy of General Relativity was already mainly at inaccessible energy-densities or at almost pointlessly distant timelike separations from us (e.g. the earliest we could see the consequences of black hole evaporation is about a hundred billion years in the future), so what's more interesting (I think) is the study of the mechanisms that generate the metric and the exploration of non-exact solutions, rather than picking at the scabs of GR's unremovable background.


This... was an amazing reply. Thank you for offering it, and taking the time to lay out such a long and thoughtful response. I think you are pretty much dead on, and your reply helped me connect some dots in my own mind about the meta-history of relativity.

Regarding the lack of interest in Bergson during 70s and 80s, I think you are precisely right, and the untestable nature of the time-like ramifications of relativity weren't something I had previously considered. Of course, by that time Einstein was so obviously right, and Bergson so obviously wrong, I think those physicists can be forgiven for not knowing, or for not giving a shit if they did know.

One of Bergson's chief objections to the Twins Paradox was the idea of time slowing down for the twin sent on the relativistic journey. Such a thing made no sense to him, given how he framed time: as an unrolling now that could not be subdivided into metric units.

Bergson's objections to time-like relativity are certainly understandable, I think, given the historical context. As you pointed out, physics without a background of absolute space was still a radical notion - the concept of the ether, an absolute background metric against which spacetime is measured, was the 'standard' model of the time. I would go even further, and say that many physicists at the time either had severe difficulty in coming to terms with physics based on frames of reference, or they rejected it outright. So I don't think Bergson's objections to a relative experience of time are unreasonable, nor do I think you can fault him for his objections, given the difficulties physicists themselves had coming to terms with the implications of relativity. Something I hadn't really considered, however, is that we didn't have the laboratory apparatus to test the hypothesis that time passes differently under acceleration until decades after Bergson himself was dead.

Regarding clocks, I certainly understand that a 'clock' in physics is a shorthand for a physical system undergoing periodicity: whether it is an actual clock, a cesium atom, or a gas, etc. For Bergson, however, it was the act of reducing the dimension of time to a countable metric itself that was problematic. For him, the idea that time can be subdivided like space was simply a trick of memory, not actual experience. If we focus only on the unfolding 'now' - something difficult enough to do that Bergson wrote whole books on it - we only see one moment elide seamlessly and smoothly into the next.

Bergson had no problem with pointing out that metric time worked quite well in modeling physical systems; his objections were to using this approach to model human experience (particularly with regards to free will and the implications of determinism inherent in relativity). Bergson was a proto-postmodernist, and was trying to get at the idea that the 'map is not the territory.' Hence Bergson's focus on the Twins Paradox. Relativity allows for a space-like time that can be 'run in reverse,' but actual time isn't space-like, in the sense that it can be traversed in one direction only. So despite what Einstein's equations predicted, Bergson objected that the notion of the Twins experiencing time differently was non-sensical.

What I hadn't realized prior to reading your comment is the similarity of Bergson's objections to the objections/difficulties physicists themselves had in abandoning the idea of a fixed, background metric space. He is essentially arguing for a fixed background of indivisible non-metric time that everyone experiences universally and that unrolls at a fixed rate for all observers.

On a side note, I've always thought Bergson (and pretty much the entire history of philosophy prior to Einstein) had it precisely backward. Thousands of works have focused on and prioritized time as a cornerstone philosophical concept. Bergson was not alone in his obsessive focus on it. And yet, time is the most ephemeral and intangible concept of them all. You can't see it, you can't hold it, there is nothing there. 'Time' as we know it is merely the periodic repetition of some spatial change in a physical phenomenon: the vibration of an atom; the periodic steps of a watch hand; the filling of a fixed volume of space with water (as in a water clock).

Perhaps it's only the fact that I take living in a post-Einsteinian space-time for granted, but I always found it strange that people -including Bergson - so obsessively abstract 'time' as something distinct from itself, when what they are really seeing is space itself unfolding into... well, more space I suppose.

Thanks again for the thoughts, it was a great read with my morning coffee!


[continuation of too long comment]

"He is essentially arguing for a fixed background of indivisible non-metric time that everyone experiences universally and that unrolls at a fixed rate for all observers."

Right, that pre-Einsteinian picture has proven to be wrong. Accurate clocks at different altitudes and moving at different groundspeeds bear this out, even if people living on mountaintops or flying in jets don't notice the parts per billion difference in their day from the people living at sea level. The GPS tools they have with them do, though.

And, sadly, he did not live long enough to see 1971 ( https://en.wikipedia.org/wiki/Hafele%E2%80%93Keating_experim... ).

Penultimately, there are some theoretical physicists who think time is "real" in the sense that it is fundamental rather than just emergent. I think you are taking an emergentist position (which I agree with) when treating it as arising from observed periodicity. (Remember that your observation of something's period -- like the bouncing light pulse between the parallel mirrors -- is not necessarily the same as another person's observation of the same something.)

Finally, just to bend your brain a bit, in General Relativity in any universe which is even close to being like ours, you cannot have a system where a pair of mirrors with a light pulse bouncing between them can be forever parallel. The parallel mirrors and light pulse are a system of mass-energy that source very slight (but nonzero) curvature. That curvature means that the parallel mirrors, if close to one another, are on a converging path even in empty space far from all other matter. If far from one another, the metric expansion of space means that the parallel mirrors are on diverging path. In a completely empty universe with a finely tuned dark energy, one can set up a classical system in which the system is extremely finely balanced so that the mirrors will stay the same distance apart (measured locally by a notional mass-energy-less observer moving with the mirrors), but real mirrors and light, made out of parts of the Standard Model, will break that fine balance, and the mirrors will move onto either a converging or a diverging path eventually (maybe bet on diverging because of the relative strength of the electromagnetic interactions with the light pulse compared to the gravitational potential energy, and because real mirrors are imperfect reflectors so some photons will "leak away").

On top of that, a really long (approximately "straight-line") Twin Paradox journey in an expanding universe can put a cosmological horizon between the Twins, so they'll never be able to compare their wristwatches in person. Each will see the other slow down and grow dimmer, but only the one moving at near the speed of light (still locally constant everywhere) will live to see her twin disappear completely across the horizon.

(Of course a similar journey confined to the neighbourhood of the Milky Way, e.g., by zipping to and fro many times, will not involve a cosmological horizon.)

"post-Einsteinian space-time"

Well, we call it post-Newtonian. General Relativity's fundamental theory (and in particular the Einstein Field Equations) is very much Einsteinian still. We just understand it better than he did, mainly because we have newer calculational tools (and newer mathematical innovations), and because we have the advantage of access to many thousands of relativists' work over the sixty years or so since his death.

> Thanks again for the thoughts, it was a great read with my morning coffee!

Likewise.


Thanks, I enjoyed your reply to my reply too.

The sequence of discoveries or formalisms weighs heavily on how we teach students; it's not just because earlier formalisms are necessarily easier or more intuitive, although they certainly appear to be - at least to young people who have grown up very close to the surface of the earth - when it comes to classical mechanics and Newtonian gravity versus the post-Newtonian extensions.

Certainly lots of physicists took varying amounts of time coming to terms with Special Relativity; few today are au fait with General Relativity. Indeed, even relativists who are will tend to prefer to cast problems as Special Relativity ones, using (or even deliberately abusing) the approximately flat spacetime close by the strict definition of "local", because even when they are comfortable with General Relativity, it is faster to use SR where one can, even in cases where one has to manually put in corrections arising from slight curvature.

In an SR setting one usually teaches Lorentz transformations by trying to impart understanding about three things: firstly, the constancy of the speed of light for all observers in uniform motion everywhere, and secondly, thinking of a "clock" that is a pair of parallel mirrors with a pulse of light continuously bouncing back and forth between them. An observer moving with the parallel mirrors will see the pulse "forever" moving perpendicularly back and forth at the same frequency. An observer in any other uniform motion will see the pulse follow a non-perpendicular path (try it with your thumb and forefinger on one hand held parallel and representing the mirrors, with your index finger on your other hand pretending to be the front of the pulse of light -- hold your hands at a fixed distance in front of your face and watch, then try moving your arms left and right, or towards and away from you, or up and down.). The third thing to understand is that the zig-zagging of your finger between your moving thumb-and-finger appears to be a longer path because it is a longer path (think of a set of coordinates on a wall you see past your hands -- bricks or a wallpaper pattern may help). Moving-with-mirrors twin sums up the length traversed by the pulse of light and arrives at something shorter than not-moving-with-mirrors twin's sum, since the latter sees the pulse travelling along a zigzag between the moving mirrors. Since light always travels at a fixed speed, the longer zig-zagging path must take more time than the shorter always-parallel path. That is, each zig-zag "bounce" takes longer, i.e., the zig-zag bounce frequency is lower, or equivalently, the moving-with-mirrors twin's time is passing more slowly.
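Written out (a sketch of the same argument; L is the mirror separation and v the mirrors' speed relative to the second observer), it's just the Pythagorean theorem:

    % One tick for the twin moving with the mirrors:
    \Delta t' = \frac{2L}{c}
    % The other twin sees the pulse traverse the longer zig-zag path,
    % whose half-length is \sqrt{L^2 + (v\,\Delta t/2)^2}, so
    c\,\Delta t = 2\sqrt{L^{2} + \left(\frac{v\,\Delta t}{2}\right)^{2}}
    % Solving for \Delta t gives the familiar dilation factor:
    \Delta t = \frac{2L/c}{\sqrt{1 - v^{2}/c^{2}}} = \gamma\,\Delta t'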

Einstein wrote about light bouncing between parallel mirrors, but unfortunately almost always in technical settings. I wonder if that would have helped people like Bergson.

'map is not the territory' -- funnily that's exactly what General Relativity is about; diffeomorphism invariance means that you can have arbitrarily many maps of the same matter, all exactly equivalent, and that you can apply arbitrary coordinates over the configuration of matter.

'Relativity allows for a space-like time that can be 'run in reverse'

Well, sorta. Flat spacetime is time-symmetric; since the symmetry group of flat spacetime is fundamental to the Standard Model, all Standard Model interactions are time-reversible.

BUT... the Hubble volume is extremely curved and in an expanding universe, time-reversibility is far from clear. Indeed, there is a pretty clear thermodynamic arrow-of-time, since the earlier universe, being hotter and denser, had less entropy than the later universe (which has lots of almost wholly empty space, and space with a tiny tiny tiny energy-density can be arranged in all sorts of ways and look the same macroscopically). As the metric expansion of space continues, entropy increases because of all that extra new practically empty space. The empty space can pop up all over the place and in almost any sort of configuration, and we get the same overall picture of the cosmos (in particular everything on Earth looks fundamentally, if not absolutely exactly, the same). Reversing the "movie" of the expanding universe with lots of galactic clusters in it requires very careful positioning of all the "almosts" in the empty space as it disappears, otherwise the overall picture of the cosmos diverges dramatically from our history of it. So at that scale, time-symmetry appears to vanish.

(You could also think of it this way: if you blow up the earth you get a cloud of dust and rocks and stuff. Following Boltzmann's definition of entropy as above, one cloud of dust and rocks and stuff can be pretty indistinguishable from another. But if you reverse the explosion (say, via gravitational collapse), you aren't going to get the dust and rocks coalescing into cities and coral reefs and the Himalayas as we know them unless you are very very precise. So even at that scale, time-symmetry vanishes.)

I am not certain that our brains are actually sensitive to those sorts of time-symmetry violations. Maybe we don't reconstruct the future as well as we reconstruct the past because some part of our brain was lost during the evolutionary periods in which we lost various features found in our common ancestors with birds (e.g., the ability to synthesize vitamin C in our own bodies; tails; nictitating membranes on our eyes; ...). It'd be interesting to have a conversation with a corvid or a grey parrot or something, or a cetacean. Maybe they have a more symmetrical view of "past" and "future", in that they can remember both. Maybe we are good at playing catch because our brains actually "remember" where the ball will be, rather than doing some sort of calculative prediction.

General Relativity is not quite silent on these points; the theory is a "block world" one in which the whole of spacetime is fully determined. Formalisms that do a 3+1 foliation to look more like pre-Einsteinian physics can produce surprisingly bogus results, even though the "block world" suggests that if we know the entire configuration of the universe at any "slice", we know the configuration of the whole "block" history of the universe. (Why that is so is the subject of a substantial amount of current research.)

'actual time isn't space-like, in the sense that it can be traversed in one direction only'

So, above, I said that in an expanding universe, or in the presence of curvature near planetary masses, time-reversal fails, but it fails globally. The individual local interactions within atoms and within molecules are all fully time-symmetric (and we can more-or-less show this in labs).

Again, this is a hot topic in physical cosmology. However, I think everyone agrees that no humans are known to have travelled backwards in time, even if subatomic parts of humans may have (due to e.g. the presence of positrons from radionuclide decays within their bodies, or the uncertainty principle).

[comment too long, so dividing it here]


Images from the front page of reddit are autopromoted to the front page of imgur.

One of the reasons I like imgur: it's Reddit without the Reddit UI or redditors.


One of the reasons I like Reddit: It's Imgur without the Imgur UI or "imgurians".


Fair enough. If Reddit is what works for you then rock it. Regarding the imgur community, I personally think they are a little too tryhard and silly, myself.

But I don't go there for the community. All I want are the best of the funny, amusing and interesting images the Internet can generate, served up in a never ending stream. After a long day of reading, coding, emailing, and everything else, the last thing I want to do is navigate yet another wall of grey on white text.

Imgur is pure brain candy in that sense. No thinking, no real discussion, no need to look at comments or interact with the users. Just flip awww, kitty flip hey boobs flip neat!


He says it in the article but it always bears repeating: if you aren't the customer, you're the product.


No, even if you are the customer, say at a supermarket, they're still going to take your purchase data and sell it. Supermarkets are especially sneaky, they'll charge you extra if you refuse to be tracked via a "loyalty" card, and then they market that extortion as savings!


Supermarkets will give you as many 'loyalty' cards as you want. And you can put false info on them.

I assume they can still track your purchases by credit card #.


Handy tip: in the US, enter (nnn) 867-5309 (where nnn is the local area code). There's a real good chance someone already has a card assigned to that number.


Actually, BevMo is now being much more brazen about it. They scan your driver's license under the guise of age verification.

I'm several decades past 21. There's no way in hell that I'm an underage drinker. When pressed the last time I was there, I was told that age verification up to the age of 50 is store policy.


Along the same lines --

In bars, occasionally young women will come in who will give you free packs of cigarettes in exchange for scanning your ID. I've heard - mind you, this could be completely false - that the cigarette companies sell that data to insurance companies.


Ha, I don't use loyalty cards because of the tracking but I never thought of the price difference that way, it makes sense.


No, you are wrong here. All machines get old and wear out; digital machines wear out faster than mechanical ones. The OP you are replying to had a good point - capacitors in particular have a limited lifespan, and electronics are fragile. When the physical lifespan of 99% of the internals is 2-3 years before attrition, in whatever form it takes, claims at least one component, why engineer something that costs 4x more but is only twice as rugged?

You seem to be hung up on Moore's law as well. And while Moore's law has lost some of its teeth in recent years (and tricks like cluster computing and specialized designs are staving off some of its effects), hardware still grows old quickly. (And bitching about your 5 year old $1k hardware is hilarious to those of us who remember paying $5-8k or more for "all the computer you will ever need" only to see it become almost entirely obsolete in less than a year.)

But the fact is this: your 5 year old etch-a-sketch only has 256mb ram on which to run its OS and apps. Considering what people expect out of a tablet these days by way of web browsing, multitasking, etc, your iPad 1.0 actually IS obsolete, or quickly will be.

As to open sourcing the design - why in the world would Apple open source the trade secret that literally makes it Apple? The fact that Apple built it and no one else does is literally what makes Apple all of those earnings.


My 5yo netbook is no etch-a-sketch. Without getting into OS debates, this "old" machine boots up and gets my presentation to the projector faster than any off-the-shelf Apple/MS machine. 5 years is nothing for a digital machine. Capacitors can and do last far longer. They cost pennies - not even, fractions of pennies. If iPads are failing because of such fundamental components, then Apple has questions to answer.

256mb belongs in the 90s, not 2010. This machine came with 1gb and was upgraded a couple of years ago with an SSD and 4gb of memory. And anyone who travels will tell you that having an old-school VGA output is very useful. The average conference-hall projector is far older than the average portable. One less adapter is one less thing to lose.

You don't have to "open source" designs. Look at what car companies do. They maintain a supply of spares and they license out designs they don't want to build themselves. That's what people expect from car companies.

Throwing working machines in the trash simply because they are scratched or fail to keep pace with fashion is wasteful vanity.

