I'm curious: how would the achievement gap change if all or most post-secondary education was delivered online instead of at physical colleges/universities?
I know someone who teaches at a community college and she said that student groups who normally do poorly in regular classes do even worse in online courses. Random paper here seems to agree with that: http://www.ashe.ws/images/Gross%20and%20Kleinman%20-%20Need%...
I don't think that would change much. This isn't so much about how the kids are being taught as about whether the working-class kids feel like they belong. If anything, online classes are arguably worse for kids who are already doing poorly, since they get less opportunity to ask questions in class or to get help from the professor and TAs.
Given the mechanism that seems to cause people to drop out (basically, lack of encouragement and/or sense of belonging), you can make arguments both ways.
The completion rate for online courses is very low anyway, so this difference might well be immaterial in the noise of that overall low rate.
The point isn't that she didn't learn to study. If you had read just a little bit farther, you would have seen the real reason: at the first minor setback she hit, she began to question whether she really belonged at a prestigious four-year school. That's a pretty understandable reaction given the culture shock she experienced in going to UT. Rich people with educated parents get to feel like they belong at colleges; poor and working-class people don't.
Regarding your comment on an "FPGA backend" for GCC: you have to understand that simulating a VHDL design (which is what is actually implemented) is a drastically simpler task than synthesizing an FPGA image. Logic optimization, place and route, and timing analysis are entirely out of scope for the GCC project, and the details differ significantly between FPGA vendors and even between an individual vendor's products. It just isn't a realistic goal.
Agreed that it is outside the current scope of gcc. But given that gcc is able to handle simulation, wouldn't that indicate that gcc's intermediate representation can capture the semantics of a VHDL netlist? That's where I'm starting from.
Assuming the above, I'm thinking of a project, independent of gcc, that takes gcc's intermediate representation and does all the FPGA-specific tasks you mention. Yes, it would be a huge project, comparable in scope to gcc itself, and even that might be an underestimate. It could start small, to keep it realistic, then incrementally expand its scope, just as linux and gcc did. Eventually, the FPGA vendors might have to choose between participating and losing customers. It might even be able to exploit some of gcc's backend infrastructure in the FPGA process, but who knows?
> It just isn't a realistic goal.
Or it's a red rag to a bull, to the right person. :-)
I was two blocks away at Stubb's when this was going on, and I walked through that area many times already during SXSW. I very rarely realize how fragile my life is, but events like this make it real.
The big difference from my perspective is that Atom isn't currently FOSS. Other than that, the maturity of the emacs ecosystem is both one of its greatest features and a bit of a curse.
The non-FOSS part is the part that bothers me the most. Maybe bothers is too strong a word. It's just hard for me to get into a text editor without knowing that the community could pick it up and run with it, if the originator dropped it.
Not sure if that's an unfounded fear, but I don't want to invest time customizing a platform and writing plugins when it could close shop in a second. I suppose I could say that about a lot of things I do customize, but it's a hard sell when the FOSS text editor world is so rich.
40 years of legacy decisions can be a huge weight on a project, but reinventing a newer, rounder wheel has pitfalls as well. I'd love to say 'I want a modern version of emacs' (or, in my case, vim), but I'd be terrified that I'd end up with a TextMate2-style second system syndrome.
It looks like the GH guys have worked around this problem quite well, and I'm looking forward to begging, borrowing, and then just begging again if I get a chance to try it out.
Some of the memory ideas are similar: Itanium had some good ideas about "hoisting" loads [1], which I think are more flexible than the Mill's solution. In general, this is a larger departure from existing architectures than Itanium was. Comparing it with Itanium, I doubt it will be successful in the marketplace, for these reasons:
- Nobody could write a competitive compiler for Itanium, in large part because it was just different (VLIW-style scheduling is hard). The Mill is stranger still.
- Itanium failed to get a foothold despite a huge marketing effort from the biggest player in the field.
- Right now, everybody's needs are being met by the combination of x86 and ARM (with some POWER, MIPS, and SPARC on the fringes). These are doing well enough right now that very few people are going to want to go through the work to port to a wildly new architecture.
The compiler part seems to be a core part of the Mill's strategy: the representation and design seem oriented toward making it easy to compile for (the guy who gives the talks is a compiler writer). If the performance gains are half as good as advertised, and porting is not a complete pain (and it seems it won't be too bad), then they will have little difficulty attracting market share, even if only in niche applications at first.
> Right now, everybody's needs are being met by the combination of x86 and ARM (with some POWER, MIPS, and SPARC on the fringes). These are doing well enough right now that very few people are going to want to go through the work to port to a wildly new architecture.
That's not true at all. The biggest high-performance compute jobs are being done on specialized parallel architectures from Nvidia [1] (Tesla), and Intel is trying to bring x86 back into the race with its Xeon Phi co-processor boards [2].
> Right now, everybody's needs are being met by the combination of x86 and ARM (with some POWER, MIPS, and SPARC on the fringes).
I'm not sure. I think that a hard port to a new architecture must look a lot more like a worthwhile effort now that the wait-six-months Plan A no longer works, especially for single-threaded workloads. Provided the new architecture can actually deliver the goods, of course.
LLVM intermediate representation and Mill code are going to be pretty different. The LLVM machine model is a register machine with an arbitrary number of virtual registers; the backends do the work of register allocation. Basically, it's an easier, RISC-ish assembly.
So, while LLVM would be helpful for porting things to the Mill, since it's largely a "solve once, use everywhere" problem, it's still not trivial. It could take a lot of effort to make a Mill backend competitive.