Two-factor authentication for personal use, even for online banking, never really caught on.
The only device that looked at least slightly promising was the YubiKey, but it never saw much adoption either, as its various operation modes all had their fair share of issues.
- HOTP was supported, and YubiCo even ran a public authentication service, but no major internet service adopted it or encouraged its use.
- TOTP was only partially supported: a small helper tool on the user’s workstation had to supply the current time to the token, since the token has no built-in clock (understandable, as a clock would also require a battery). The token also had only two key slots, so you couldn’t use a single token with all the internet services you were using.
- The static password mode sounded practical at first, until you realized that you’d still need a password manager to avoid using the same password with every internet service, and that the password database would then have to be shared between all of your workstations. While this might be suitable for some, it’s not really a great solution.
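The reason a TOTP token needs the current time at all becomes obvious from the definitions in RFC 4226 and RFC 6238: TOTP is simply HOTP with the event counter replaced by a time step. A minimal sketch using only the Python standard library:

```python
import hashlib
import hmac
import struct
import time


def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    # RFC 4226: HMAC-SHA1 over the 8-byte big-endian counter
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation: low nibble picks the offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


def totp(key: bytes, step: int = 30) -> str:
    # RFC 6238: the "counter" is just the number of 30-second steps
    # since the Unix epoch, which is why the token must know the time
    return hotp(key, int(time.time()) // step)
```

With the RFC 4226 test key `b"12345678901234567890"`, `hotp(..., 0)` yields the documented test vector `755224`.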
Recently, Google and a few other companies have started working on what they call Universal 2nd Factor (U2F). It is also based on hardware tokens, but uses a local key store on the token, where each service you authenticate with gets its own key, and the authentication itself is handled by a module built into the web browser.
It’s not ideal, because it needs support in the browser, which the YubiKey historically didn’t need since it simply acted as a USB keyboard, typing one-time tokens into any application. Still, I think it’s a very interesting approach and I hope it will succeed. If Google really encourages its users to use these hardware tokens, there is a realistic chance it will be adopted by a critical mass of users, which could change how we authenticate to internet services and make us all more secure.
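The per-service key idea can be sketched as a toy model. To stay self-contained this sketch uses HMAC-derived symmetric keys as a stand-in; a real U2F token generates a fresh ECDSA P-256 keypair per registration and returns the public key to the service, and the browser binds the requesting origin into every operation, which is what defeats phishing:

```python
import hashlib
import hmac
import os


class Token:
    """Toy model of a U2F-style token: every origin gets its own key,
    so no key is ever shared between services."""

    def __init__(self) -> None:
        self.master = os.urandom(32)
        self.keys: dict[str, bytes] = {}  # origin -> per-service key

    def register(self, origin: str) -> None:
        # Real U2F creates a new ECDSA keypair here; we derive a
        # symmetric key from a master secret purely for illustration.
        self.keys[origin] = hmac.new(
            self.master, origin.encode(), hashlib.sha256
        ).digest()

    def sign(self, origin: str, challenge: bytes) -> bytes:
        # The browser supplies the origin, so a phishing site can
        # never obtain a response under another service's key.
        return hmac.new(self.keys[origin], challenge, hashlib.sha256).digest()
```

Registering two different origins yields two unrelated keys, so a response produced for one site is useless to any other.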
According to an article on Wired, Facebook is building a farm of Mac Minis because
… Apple insists that all Apple software run on Apple machines, Facebook can’t test its iOS app on the Linux servers that drive the rest of its empire — or on the popular cloud services offered by tech giants like Amazon or Microsoft. So, says Legnitto, the company operates “racks and racks” of Mac Minis that run Apple’s Mac OS X operating system.
The mention of Linux refers to Apple’s license restrictions on virtualizing Mac OS X on non-Apple hardware, not to actually compiling or running iOS applications on Linux.
That’s what happens when consumer companies hit the corporate data center. If this is the future, I want to opt out :)
I’ve heard quite a lot about Node.js in the last few months and now it seems to be getting even more traction with PayPal planning to use it for all of their consumer-facing web applications.
Today I finished reading the Ars Technica review of Mac OS X 10.9 Mavericks. It’s an extensive review, but it was quite an interesting read. I especially liked the technical sections on the tagging implementation, which show the crazy things going on underneath the covers. The energy-saving features like App Nap and memory compression were also quite interesting, and we’ll soon find out whether it was a good idea to show users so visibly in the UI which applications are battery hogs.

The changes regarding Safari, iBooks, and Maps were mostly not that interesting to me, but who knows which features might actually turn out to be useful in the long run. The multi-monitor support sounds quite useful, but that hasn’t been much of an issue for me personally. I won’t use the iCloud Keychain, but not because it isn’t useful; I just don’t like saving passwords in my browser anyway. The price of $0 makes this quite an affordable upgrade, though as a VMware Fusion 4 user I’ll probably have to invest in an upgrade to VMware Fusion 6 as the price of admission. In any case this upgrade looks like a no-brainer as soon as 10.9.1 is out.
It took much longer than it should have, but the wsadminlib team announced that wsadminlib is now available on GitHub.
Let’s all fork and improve it on GitHub so we can finally stop reinventing the wheel. When I get a chance I’ll definitely move my wsadmin-scripts over into my own wsadminlib fork.
Update 10/27: Removed link to my old wsadmin-scripts repository as my scripts have now been merged into wsadminlib. This is a great day!
The New York Times published a story earlier this month about how the iPhone came to be and what obstacles had to be overcome.
This is not only an interesting story for anyone working with technology, but it also shows that not everything at Apple always “just works”, at least not for their own engineers :-)
It looks like Steve Jobs was well aware of how buggy the first prototypes he was demoing were, and it turns out Apple is still working with the same technology as everyone else. But we already knew that.
It’s just good that their superior products are based on the hard work of engineers and not on fairy dust sprinkled randomly over their iProducts.
Have you read, and finished, a book in the last month? I know I haven’t, and that makes Seth’s post on an end of books quite a fascinating read.
Bookstores have been doomed for a long time in countries where Amazon and other online stores have entered the market. Their broad coverage and free-shipping models are just too convenient, and bookstores weren’t able to reinvent themselves in the market they once owned.
What resonates with me is that I also don’t see the point in keeping lots of printed books that I’ve already read piled up on a bookshelf. The number of books I’ve read more than once is very low, comparable to the number of movies I’ve watched more than once.
On the other hand, I don’t have an e-reader yet, even at the €69 price point the Kindle currently sells at. Reading paperbacks is just as convenient, and e-books are currently not really sold for less than the printed version. What is pretty convincing is that with an e-reader old books don’t pile up, yet you can still access them whenever you need them, and your new magazines or newspapers could be delivered to you wirelessly.
What’s holding e-readers back, though, is that everything is available on paper but not everything is available on a specific e-reader (availability even varies by geography), which means that an e-reader is currently, at best, only a partial solution anyway.
The end of books will come, but not now.
When the impending shutdown of Google Reader was announced it was interesting to watch what happened. Many services tried to replace it, but only some of them catered to the core audience of Google Reader.
I looked at some of the alternatives early on, but then decided to wait until the shutdown on July 1 to reevaluate my options.
I ended up choosing Digg Reader and I couldn’t be much happier. It really works well for what I use it for and with the update on July 19 they added all the remaining features I was looking for:
- Simple and focused interface
- Unread counts for feeds
- Nice keyboard shortcuts (mostly what I’m used to from Google Reader)
- Show only unread items
- Mark all items as read
- Mark single item as read
Digg Reader’s infrastructure seems to be heavily based on Amazon Web Services and Python, and the interface has been quite snappy since I started using it.
I hope they’ll choose the right model to finance this thing. The Old Reader might be shutting down because they didn’t and we really need a good replacement for Google Reader.
Today I’m proud to announce that we’ve officially released alpha1 of the Arquillian Container for WebSphere Application Server V8.5 Liberty Profile. With this first alpha release, the Liberty Profile container support is available from a public Maven repository for the first time.
If you need any other WebSphere container adapter, you’ll still need to get the arquillian-container-was code and build it yourself.