Another QA fail: Latest Win 10 update breaks webcams. 

A widespread issue is affecting Windows 10 on devices with built-in webcams. A change was made in the most recent build of Windows 10 to improve performance when multiple applications request access to the same webcam.

Previously, each process would stream H.264 from the camera, so each would be decompressing the video at the same time. Moving forward (once the issues are addressed), apps should pull a single YUV or NV12 stream from the OS, which does the H.264 decompression for them.

That means only one decompression operation happens during streaming. The bug appears to manifest when a program isn’t expecting this new behaviour and exits with an error.

Microsoft’s user support forums have lots of chatter, but this thread in particular is yielding some practical workaround steps. MS have confirmed a fix is coming.

Skepticism and indifference: PowerShell is open-sourced.

Another strategy adjustment by Microsoft that seems so out of character compared to their precedent of appallingly vile hostility, but it sure will make managing Windows servers easier: VMs and other bulky abstraction layers are no longer required. Very excited about what this will mean for SaltStack. Nadella is, yet again, steering a course away from the delinquent failure spree championed for years by ‘9-year-old with ADHD’ fuckwit adult-baby Steve Ballmer.

With that said, I couldn’t have summarised things any better than Mauro Santos, who said this on the Arch Linux mailing list:

Given that it comes from Microsoft they must have some agenda to fulfill; I’d rather not touch this even with a 10-foot pole, just like I try to stay away from other MS products as much as I can. The AUR is where it should stay, but even then they can spin it as PR fodder just like Canonical spun snappy coming to Arch Linux. (*)

Make no mistake, they are after profits and will do whatever it takes to keep the money flowing; all their friendliness to Linux and open source is tainted with patent attacks behind the curtain. The next time the leadership changes there is no guarantee that this newfound friendliness isn’t going to change.

(*) https://www.happyassassin.net/2016/06/16/on-snappy-and-flatpak-business-as-usual-in-the-canonical-propaganda-department/

The “AS7007 Incident”

http://lists.ucc.gu.uwa.edu.au/pipermail/lore/2006-August/000040.html

The "AS7007 Incident"
Adrian Chadd

It was an average day in 1997. The Internet, fledgling compared to today's
standards. Internet operators (mostly!) trusted one another. SMTP servers
would be open relays; a number of open web proxies and anonymous dialout
servers were available. People were worried about running out of IP space.
Network Operators were worried about the CPU on their routers being
taxed dealing with a full routing table of ~45,000 entries.

Then, suddenly, the internet stopped working. Network Operators everywhere
sprang into action to discover the cause of the lack of traffic.
And there it was. As far as the routing protocols were concerned, the
entire internet existed in one location - some crappy Bay Networks
router in AS7007.

The problem was fixed rather quickly - the misbehaving router was pulled
from the network. But this didn't solve the problem. Routers were still
crashing all over the internet. Where were the announcements coming from?
How could one stop it? Was the Internet, kept running by gaffa tape,
IRC and sushi, finally coming to an end?

Everything settled down a few hours later. Network Operators around the
globe began discussing the impact of this outage and how it could be
prevented. The Internet did fundamentally change - but unlike a lot of
other changes, the general users knew nothing about it.

What is BGP? BGP is the protocol by which networks on the internet announce
two things to other networks: that they exist, and which networks can be
reached through them. It is also how they learn how to reach the other
networks on the Internet.
Routers will receive BGP information, decide upon the "best" path to take to
a destination network and update their routing table.

BGP uses a few metrics to determine the "best" path.  The most obvious metric
is the number of networks between them and the destination network - the
"AS Path length". A shorter AS path length is generally better. This isn't
the whole story but as you'll see, it didn't matter.

The other metric is how specific the route is. A more specific route is
preferred over a more general route, regardless of AS path length or any
other metric. So if you see an announcement for 130.95.0.0/16 (ie,
130.95.0.0 -> 130.95.255.255) via path A and an announcement for
130.95.0.0/24 (ie, 130.95.0.0 -> 130.95.0.255) via Path B, traffic destined
to any host inside 130.95.0.0/24 will flow via path B regardless of how
much closer path A is.

So there's this router in AS7007. It learnt the entire internet routing
table via BGP. It began converting most routes into /24s - ie,
routes which covered 256 IP addresses. Somehow, and this part is fuzzy -
it then managed to "leak" this table back into BGP and reannounced to
the entire internet almost every network that was available. Deaggregated
down to /24's. As originating from his AS number.

So the AS path was removed (ie, every network on the internet looked like
it was his) and every announcement was very specific (/24).

So, as far as the routers on the internet were concerned, every network
everywhere could be reached by sending traffic to AS7007.

And, they did. The internet existed at the end of a 45-mbit pipe, connected
to AS7007.

This was rectified quickly. The port was shut off and the announcements
ceased. But the problem didn't go away. Routers kept passing on this
massive 250,000-entry routing table and, in many cases, would then crash.
They'd reboot; re-learn all the routes from a peer, re-distribute them,
and crash again.

Not only that, but routers worked in finite time over links which transmitted
at a finite data rate and latency bounded by the speed of light. These
announcements bounced around the internet for hours. Many internet
backbones solved the problem by turning off all their equipment, shutting
off the ports, staging reloads of their equipment, adding route announcement
filters to reject receiving the routes in the first place, and then
turning on their network connectivity.

The aftermath? Network Operators began filtering route announcements
from their peers and customers. At a coarse level - customers could
only announce networks originating from their AS numbers. At a fine-grained
level - some companies only accepted route announcements matching
certain criteria. This involved first registering your network inside
the RADB - in which you would describe your network, the networks you announced
and how you connected to other networks. Most networks did something
in between. Vendors began adding "magic" into their routers to allow
administrators to control how many announcements a peer could send before
shutting that peer off or ignoring further announcements. The usual talk
of "cryptographically signed" data popped up but nothing happened for
a long while.

And the owner of AS7007 was never able to live it down.

Disclaimers:

Much hand-waving has been done about IP routing here. I could be more specific
but the article would be much, much longer. Email me if you're interested
in a further explanation.

References:

* Someone first noticing what was going on
  http://www.merit.edu/mail.archives/nanog/1997-04/msg00340.html

* What happened
  http://www.merit.edu/mail.archives/nanog/1997-04/msg00444.html

* "Delayed Internet Routing Convergence"
  http://portal.acm.org/citation.cfm?id=347428&dl=ACM&coll=&CFID=15151515&CFTOKEN=6184618

* "Understanding BGP Misconfiguration"
  http://citeseer.ist.psu.edu/mahajan02understanding.html

* "BGP Design Principles"
  http://www.riverstonenet.com/support/bgp/design/index.htm
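
As an aside to Adrian's point about prefix specificity: you can watch longest-prefix matching win on any Linux box with iproute2. This is only a sketch, needs root, and uses a throwaway dummy interface with made-up next hops; the prefixes are the ones from the example above.

ip link add dum0 type dummy && ip link set dum0 up
ip addr add 192.0.2.2/24 dev dum0            # network holding the "path A" next hop
ip addr add 198.51.100.2/24 dev dum0         # network holding the "path B" next hop
ip route add 130.95.0.0/16 via 192.0.2.1     # broad route, "path A"
ip route add 130.95.0.0/24 via 198.51.100.1  # more specific route, "path B"
ip route get 130.95.0.10                     # inside the /24, so it resolves via 198.51.100.1
ip route get 130.95.200.10                   # outside the /24, falls back to the /16 via 192.0.2.1
ip link del dum0                             # clean up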

Folding@Home V7

Who wants wasted CPU cycles? If you’ve paid for silicon and it’s not warm, you’re wasting it. There’s no excuse if your workloads are peak-heavy: the 7th version of Folding@Home will not only spend your unused cycles helping to simulate the trillions of ways proteins fold (and misfold); the latest release also stays firmly true to the Unix method, with three separate packages representing the three distinct parts of the Folding@Home software. You have the choice to add a web UI or a fancy visualiser that shows exactly how the current workload would appear if it weren’t so ridiculously tiny. Here’s a bootstrap script that will install only the compute part, with no GUI or visuals, inject a ready-made config with my team ID, then run it. Adapt to your needs.

Fedora 23 Onward (DNF instead of Yum)

dnf -y install https://fah.stanford.edu/file-releases/public/release/fahclient/centos-5.3-64bit/v7.4/fahclient-7.4.4-1.x86_64.rpm && curl https://raw.githubusercontent.com/leefuller23/configs-and-that/master/etc/fahclient/config.xml > /etc/fahclient/config.xml && /etc/init.d/FAHClient restart

Debian and derivatives. Have your team ID to hand or roll with defaults to stay anonymous. 

wget https://fah.stanford.edu/file-releases/public/release/fahclient/debian-testing-64bit/v7.4/fahclient_7.4.4_amd64.deb && dpkg -i --force-depends fahclient_7.4.4_amd64.deb
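
Whichever route you took, to sanity-check that the client is actually running and chewing on a work unit, something along these lines should do; the log path is an assumption based on the default package install locations.

pgrep -a FAHClient                       # is the client process up?
tail -n 20 /var/lib/fahclient/log.txt    # recent progress, assuming the default data directory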

In an event-driven environment like private IaaS you can optimise and consolidate production workloads in real time. Instead of your servers sitting idle when the load drops off, and assuming you aren’t running batch jobs or have some other use for idle periods, you can spec FAH at the lowest QoS priority; when there’s nothing else to do, your server farm will contribute a bit of lovely distributed protein-folding simulation.
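
If you’re doing this on a plain Linux host rather than through your IaaS scheduler, one blunt way to keep folding strictly below production work is to start the client in the idle CPU and I/O scheduling classes. A sketch only; the binary and config paths are assumptions based on the packages above.

# SCHED_IDLE + idle I/O class + nice 19: FAH only gets cycles nothing else wants
chrt --idle 0 ionice -c3 nice -n 19 /usr/bin/FAHClient --config /etc/fahclient/config.xml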

If your breakpoints are transient (as in, hours not days between peak duty cycles) you can even shut down the VM and restart it later; the checkpoint feature commits work to disk and restores it, so it’s probable the work unit can still be completed before the expiry date. I applaud.

If you’re cynical, I understand why, but medicine is a long process and we are only about a decade into this idea. Consider that even with medical proof, it takes 10 years simply to get a new drug approved. There have been several very significant breakthroughs, and there will be many more to come. Cancer affects all of us, somehow. If your stack has bare metal, and your costs are fixed, I urge you to consider delegating your overnight idle time to help the research continue.

If you’re running OpenStack you could use Heat to automate the entire process, filling gaps of idle time with low-priority autoscaling groups. Even if you need to terminate VMs to handle an uptick, it’s not an issue: each work unit is delegated to many donors, and FAH doesn’t mind if you don’t finish in time. Instead, it adjusts and delegates less intensive WUs to increase the chance of success next time.

If you can control power to your hypervisors it would certainly be wise to migrate VMs onto fewer running boxes to save energy, but a running server deserves a workload, and curing cancer seems like a good way to soak up unspent cycles. Only a truly selfish asshole would disagree.