As far as I can tell, in order to be "forced" on a user, AMP must rely on javascript, the browser used or maybe the OS (I trust they are not rewriting search results to point to AMP but that could be another one).
A no-javascript command-line tcp client will retrieve the page without automatically following the amphtml link. Users thus have a choice. And if they choose the amphtml link, it is easy to filter out everything but the text of the page (the content). In that sense AMP is quite nice.
The "forced" nature of AMP should make users think about these points of control for advertisers and Google: javascript, browser, OS. Maybe website owners will think about them too the next time they "recommend" or "require" certain browsers. The web should be javascript-, browser- and OS-neutral.
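To make the point concrete, here is a minimal Python sketch standing in for such a bare client: it parses a fetched page for the amphtml link but never follows it. The HTML snippet and URL are invented for illustration.

```python
# Sketch: scan fetched HTML for a <link rel="amphtml"> tag without following it.
# A real client would first retrieve the page over a plain TCP/HTTP connection.
from html.parser import HTMLParser

class AmpLinkFinder(HTMLParser):
    """Collects href values of <link rel="amphtml"> tags."""
    def __init__(self):
        super().__init__()
        self.amp_links = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel") == "amphtml" and "href" in a:
            self.amp_links.append(a["href"])

# Placeholder page content, not a real AMP page.
html = '<html><head><link rel="amphtml" href="https://example.com/amp/story"></head></html>'
finder = AmpLinkFinder()
finder.feed(html)
print(finder.amp_links)  # the client sees the link but nothing is fetched from it
```

The decision to follow (or ignore) the amphtml link stays entirely with the user's code.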
As Jtsummers says, once you're talking about small programs working to form a system, they're not independent small programs anymore -- they're modules in a larger system. My point is, if you design the modules well, you don't usually have to think about the internal structure of each module when you're trying to think about the behavior of the entire system. Such design is exactly software engineering, in my view.
> Should "ability to program" be measured by how large a program one can write?
Yes, absolutely: that's a very important measure of programming ability. As I think Fred Brooks pointed out in The Mythical Man-Month, each order of magnitude in program scale requires new techniques and a new way of thinking.
I wouldn't say it's the only measure of programming ability; there is a place for finely crafted small modules. But on any project that's setting out to build a substantial piece of software, you need at least a certain number of people who know how to do that (then they can teach the rest). If nobody knows how to do it, the project is likely to fail.
> if you design the modules well, you don't usually have to think about the internal structure of each module when you're trying to think about the behavior of the entire system. Such design is exactly software engineering, in my view.
I've been trying to find the right way to phrase this, as it's more of an inkling, but it touches on my current academic (self-study) interests and also on something I've observed over the past decade or so as a professional developer and tester in the industry.
The engineering component of software engineering needs to study systems engineering. There's a great deal of overlap. My conjecture is that we move from "programming" to "software engineering" when we are more interested in the connections between components than in the components themselves. Much like how systems engineers aren't doing the work of designing the new car engine, but are overseeing that work and the work for the production line itself and coordinating with the glass manufacturer for the new windows. They care about how different parts of the final system (at the level of the car, or the level of the whole assembly process) will function between each other. Systems engineering (as a discipline) has the tools to handle this sort of division of labor, abstraction, and responsibility.
And it's not really that we don't care about internal behaviors. But that's not the primary concern of the overall system engineer. It's the component's engineer who's primarily responsible for that, while the system engineer needs to focus on how that portion fits into the overall architecture.
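As a toy illustration of that division of responsibility, here is a Python sketch in which the system-level code depends only on an interface (the `Storage` protocol, a name invented here), never on a component's internals:

```python
# Sketch: system-level code reasons only about the interface contract;
# each component's internals are the concern of that component's author.
from typing import Protocol

class Storage(Protocol):
    def put(self, key: str, value: str) -> None: ...
    def get(self, key: str) -> str: ...

class InMemoryStorage:
    """One component; its internal dict is invisible at the system level."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data[key]

def system_behavior(store: Storage) -> str:
    # Only the Storage contract matters here; the implementation could be
    # swapped for a disk- or network-backed component without changes.
    store.put("greeting", "hello")
    return store.get("greeting")

print(system_behavior(InMemoryStorage()))  # hello
```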
No program is small, as it must include the operating system.
Now - you could go down to machine code, or work on embedded systems, etc. - but the point remains. It's not just the software you are writing that's part of your program; it's also all the other, existing software (and physical designs) it makes use of, or interacts with.
Edit:
Might be called "software systems engineering", or maybe "dev ops"?
That's not really accurate. I can write a small program that does not include the OS, but rather assumes it. Just as a vehicle designer doesn't have to include the gas or electric distribution infrastructure in their design, they can assume it.
If I'm conforming to someone else's interface (say the C standard library), that's equivalent to an electrical engineer assuming various standard interfaces for the equipment they're designing. It's a design consideration, but not a design task. So the project itself may still remain quite small.
When I'm designing the interfaces that others are using, or designing the interfaces between 100 [0] smaller programs so they can interact effectively with each other and their environment, then it's a larger engineering task.
to predict the future, some look to the past for ideas.
others monitor the present e.g. github commits for popular projects - reject anything that has no recent activity.
the context is per packet encryption, the space and time needed to do it.
this is not the approach taken by tls, where one compromised packet can compromise the entire "encrypted stream". nor is it the approach taken by dnssec, where instead of encrypting, some third party gives their blessing to (signs) the data being communicated.
elliptic curve crypto is not new. but encrypting each and every packet on the internet separately is "new" (or at least "different" from current practice).
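To illustrate the per-packet idea (not the actual protocol under discussion), here is a toy Python sketch in which every packet gets its own nonce and an independently derived keystream. The SHA-256-based keystream is purely illustrative and not real cryptography; an actual design would use a per-packet AEAD such as ChaCha20-Poly1305.

```python
# Toy illustration (NOT real cryptography): each packet is encrypted
# independently with a fresh nonce, so recovering one packet's keystream
# tells an attacker nothing about any other packet.
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    """Derive n keystream bytes from (key, nonce) -- illustrative only."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt_packet(key: bytes, payload: bytes):
    nonce = secrets.token_bytes(12)            # fresh nonce per packet
    ks = keystream(key, nonce, len(payload))
    return nonce, bytes(a ^ b for a, b in zip(payload, ks))

def decrypt_packet(key: bytes, nonce: bytes, ct: bytes) -> bytes:
    ks = keystream(key, nonce, len(ct))
    return bytes(a ^ b for a, b in zip(ct, ks))

key = secrets.token_bytes(32)
n1, c1 = encrypt_packet(key, b"packet one")
n2, c2 = encrypt_packet(key, b"packet two")
assert n1 != n2  # no shared stream state between packets
```

There is no chaining state: losing or leaking one packet leaves every other packet's ciphertext independent, which is the property the comment contrasts with a TLS-style stream.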
so it is the scripting language used by ansible, chef, puppet that imposes the required "discipline"?
methinks each of these must in fact call the shell to get things done, or at least system(3).
if they are calling execve then i would be more interested.
i would also be interested in ansible, chef, puppet if i knew the scripting languages they are written in or wanted to learn them. but other languages interest me more.
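The system(3)-vs-execve distinction can be shown from Python's subprocess module, which only involves the shell when asked. This is a sketch of the difference, assuming a POSIX /bin/sh is available:

```python
# Sketch: running a command through the shell (as system(3) does) versus
# exec'ing the program directly with an argument vector (as the exec family
# does). subprocess uses the exec family when shell=False.
import subprocess

# Through the shell: /bin/sh parses the string, so shell syntax applies.
via_shell = subprocess.run("echo $((1 + 1))", shell=True,
                           capture_output=True, text=True)

# Direct exec: argv is passed straight to the program, no shell involved.
direct = subprocess.run(["echo", "$((1 + 1))"],
                        capture_output=True, text=True)

print(via_shell.stdout.strip())  # 2 -- the shell evaluated the arithmetic
print(direct.stdout.strip())     # $((1 + 1)) -- echo got the literal string
```

Whether a given config tool takes the first path or the second is exactly the question the comment raises.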
as a hobbyist, what i would like to see is a hosting provider that will boot fs images (including bootloader) that user builds on user's own computer. i.e., provider gives exact specs of machine and network details. user builds and sends fs image to provider and provider boots from it on some bare metal in a datacenter.
no "host" or "guest". no virtual server. no provider software. if something software-related does not work, it is user's responsibility because it is all user's software. user can send an updated fs image.
anyone know if this exists? the service wanted is barebones: a computer in a datacenter that has an internet connection and someone to boot it from user fs image. cost not an issue.
Ansible, Chef and Puppet are not languages. They are systems for managing the state of your server, during the whole lifetime of the server. You typically tell them "I want X to be in Y state" and they will take whatever actions to make that happen. Writing all that logic in a shell script is not particularly trivial. Of course, it's not all perfect and sometimes you have to resort to scripting-like approach, but the idea is that for the common patterns, they already have the logic implemented.
> You typically tell them "I want X to be in Y state" and they will take whatever actions to make that happen.
I had so many problems with this, because not all possibilities are tested on all distributions on all architectures with all libc's and compiler combinations. Much is implicit compared to Nix where only the kernel is implicit.
Why would you be running on dozens of distros on different architectures?
With some sane level of standardization (let's say fewer than 6 combinations) it's easy enough to test all of them and fix up the discrepancies.
The more declarative you are, and the less you assume about the environment, the better this will work.
I.e. if you need a package, always install it. On some distros it may be there by default, but you'll run into issues if it's not. The good thing about being declarative is that there's no harm in checking.
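A sketch of that "declare the state, check before acting" idea, with the installed-package database simulated by a plain set (the function name is invented here):

```python
# Sketch of the declarative converge pattern: checking before acting makes
# the operation idempotent, so it is safe to run on a distro where the
# package is already present.
def ensure_installed(installed: set, package: str) -> str:
    """Converge toward 'package is installed'; a no-op if it already is."""
    if package in installed:
        return "ok (already present)"
    installed.add(package)      # in real life: invoke the package manager
    return "changed (installed)"

system = {"coreutils"}
print(ensure_installed(system, "nginx"))   # changed (installed)
print(ensure_installed(system, "nginx"))   # ok (already present) -- idempotent
```

Running it twice (or on a host where the state already holds) changes nothing, which is why "always declare it" is harmless.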
> as a hobbyist, what i would like to see is a hosting provider that will boot fs images (including bootloader) that user builds on user's own computer. i.e., provider gives exact specs of machine and network details. user builds and sends fs image to provider and provider boots from it on some bare metal in a datacenter.
This is actually somewhat possible with linode. It's not easy to do but it's possible.
Much of the time, virtualization is just a way to give you control over the whole machine. If virtualization is thin, you don't lose a lot of performance.
It shouldn't be hard to make a Xen-backed VM administration interface. Then you can upload your MirageOS images (that you have tested on your local Linux, because MirageOS does that too) and it would work.
You can't get rid of virtualization and still have it remotely configurable. How are you going to reset configuration of a machine if you destroyed the bootloader of the ring-0 operating system?
do not want to have it remotely configurable. nor want anyone to be able to "reset configuration" remotely.
some configuration is fixed, static. to change it one needs to change the image. this is intentional. hobbyist user wants this.
bootloader is on removable media user sends to provider. user is not using any provider software. user is paying for dedicated server and internet connection. that is all.
do not use linux, mirageos. do not need to. have rump. user can make own xen guest kernels and xen host kernels. anyway, this is tangent, irrelevant. user does not need virtualization necessarily and in any case not virtualization provided by third party.
simple requirements: want dedicated bare metal server. want hardware and internet connection. do not need/want provider software. will pay more for this.
Used to do this working at a colo back in the 90s; boot the image then walk away. If someone borked their machine they'd email or call and we'd roll the power for them.
but he had assets there and his business required access to a us-based store.
one story said he met his wife in the us. if she was a us citizen that could be an additional reason he might want to be able to travel to the us.
in other words, he had reasons to care about potentially having an outstanding judgment against him in the us.
or maybe the appeals court date was before he changed the name to pirate joes?
to be fair, i am making an assumption when i say he picked the wrong name (original name was transilvania trading): that he did not want to be sued in the us. but maybe he did. does not sound like he had a large legal budget though.
there is also the possibility that the name of the business and the other things he did to mimic trader joes had nothing to do with the court's reasoning. the fact that he sold goods with a us trademark on the label was enough.
in that case maybe the name matters little.
but if that is true, then shouldn't we see some changes in the risk profile for all sellers of grey market goods, even when the names of their stores bear no resemblance to any us trademark associated with the goods they sell?
all of the grey market stores i have seen have names that do not mimic any trademarks held by the manufacturers of the goods they sell. but maybe i have not seen enough of them.
my opinion is he picked the wrong name. what consumer would think "transilvania trading" was an authorized reseller of trader joes? otoh, "pirate joes"? but what do i know? not much.
companies that make the bios were* the other sources of "ultimate trust". why not let them...
*then came uefi.
hoop-jumping never a problem with the i.t. market. more complexity is fine so long as managed by someone else. only the sales pitch needs to be simple.
why is it unfathomable that users could only trust themselves and other users? continual push toward more complexity helps keep users from ever believing this is achievable.
The very neat part about firmForth is that if you compile the firmForth JIT with cparser (libfirm compiler), it can inline C functions into the JITted code.
i have seen hn commenters praise apple for "taking a stand on privacy". but how can anyone believe that when they collect so much personal data about the people who purchase their hardware? the old apple did not do this.
1. collecting data on users for months and years after purchase, 2. storing it electronically on remote computers, 3. some connected to the internet. yes, this surely points to a company that is concerned about user privacy.
if something goes wrong can you sue apple?
we should expect every hardware vendor from laptop mfrs to the rpi foundation to be silently collecting data from their customers long after the merchandise is purchased. they need to do this, because...
wtf?
1. collecting data on consumers and 2. storing it online.
#1 is incompatible with a pro-consumer stance on user privacy.
#2 is a guarantee that others besides the company are going to get that data, whether the consumer is told about the breach or not.
Reminds me of this: http://blackhat.com/media/bh-usa-97/blackhat-eetimes.html