Today I’m proud to announce that we’ve officially released beta1 of the Arquillian Container for WebSphere Application Server V8.5 Liberty Profile. In this first beta release we’ve integrated quite a few enhancements and are very interested to get feedback from users as to what additional enhancements or fixes you’d like to see in our upcoming 1.0.0 final release.
If you need any other WebSphere container adapter, you’ll still need to get the arquillian-container-was code and build it yourself.
When displaying Google Drive photo albums, Google Plus seems to rely heavily on EXIF tags to determine when a photo was taken, so much so that it will use the upload date instead of the file modification date that is saved in Google Drive as the photo creation date. This obviously scrambles the order of photo albums when importing old ones.
To fix timezone issues in EXIF tags I’ve previously always relied on jhead. It turns out, though, that jhead will only edit existing EXIF tags, not create new ones; clearly not what I wanted to do.
So I looked around and found exiftool, which is able to create new EXIF tags, and much more.
To inject the file modification date into the file as CreateDate, and then copy it on to DateTimeOriginal:
exiftool '-CreateDate<FileModifyDate' filename
exiftool '-DateTimeOriginal<CreateDate' filename
To search for files that don’t have a creation date tag:
exiftool -FileName -if '!$createDate' directory
To list files whose EXIF date tags are inconsistent (note the Perl string comparison ne; the numeric != would only compare the leading year of the date strings):
exiftool -FileName -if '$createDate ne $dateTimeOriginal' directory
If you want to get fancy you might even use something like this:
exiftool '-DateTimeOriginal<CreateDate' \
-if '$CreateDate ne $DateTimeOriginal' directory
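The individual steps above can also be combined into a small wrapper. This is my own sketch, not part of the original recipes: the function name and the EXIFTOOL override are made up, -r makes exiftool recurse into subdirectories, and it assumes exiftool is on your PATH.

```shell
# Sketch of a combined fix-up, run against a whole photo directory.
# Override EXIFTOOL (e.g. with "echo") for a dry run.
: "${EXIFTOOL:=exiftool}"

fix_exif_dates() {
  dir="$1"
  # 1. Fill in CreateDate from the file modification time where it is missing.
  "$EXIFTOOL" '-CreateDate<FileModifyDate' -if 'not $CreateDate' -r "$dir"
  # 2. Copy CreateDate into DateTimeOriginal wherever the two disagree.
  "$EXIFTOOL" '-DateTimeOriginal<CreateDate' -if '$CreateDate ne $DateTimeOriginal' -r "$dir"
}
```

To preview what would be run without touching any files, set EXIFTOOL=echo before calling fix_exif_dates.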
Mark Reinhold today announced the general availability of JDK 8 on his blog. As is kind of a tradition by now, it took longer than hoped and expected, but the changes, especially the first usable date and time API and the removal of the permanent generation from the garbage collector, will surely be appreciated by everyone working with Java.
The use of two-factor authentication for personal use, even for online banking, never really caught on.
The only device that seemed at least slightly promising was the YubiKey, but it never saw much adoption, as its various operating modes all had their fair share of issues.
- HOTP was supported, and Yubico even ran a public authentication service, but no major internet service adopted it or encouraged its use.
- TOTP was only partially supported: the token has no built-in clock (understandable, since it would then also need a battery), so a little helper tool on the user’s workstation had to supply the current time. The token also had only two key slots, which meant you could not use one token with all the internet services you were using.
- The static password mode sounded practical at first, until you realized that you’d also need a password manager to avoid using the same password with every internet service, and that the password database would then have to be shared between all of your workstations. While this might be suitable for some, it’s not really a great solution.
Recently Google and a few other companies started working on what they call Universal 2nd Factor (U2F). It is based on hardware tokens, but uses a local key store on the token, where each service you authenticate with gets its own key, and the authentication itself is handled by a module built into the web browser.
It’s not ideal, because it needs support in the browser, something the YubiKey historically didn’t need since it simply acted as a USB keyboard typing one-time tokens into any application. Still, I think it’s a very interesting approach and I hope it will succeed. If Google really encourages its users to use these hardware tokens, I think there is a realistic chance of it being adopted by a critical mass of users, which might change how we authenticate to internet services and make us all more secure.
According to an article on Wired, Facebook is building a farm of Mac Minis because
… Apple insists that all Apple software run on Apple machines, Facebook can’t test its iOS app on the Linux servers that drive the rest of its empire — or on the popular cloud services offered by tech giants like Amazon or Microsoft. So, says Legnitto, the company operates “racks and racks” of Mac Minis that run Apple’s Mac OS X operating system.
The mention of Linux refers to Apple’s licensing restrictions on virtualizing Mac OS X on non-Apple hardware, not to actually compiling or running iOS applications on Linux.
That’s what happens when consumer companies hit the corporate data center. If this is the future, I want to opt out :)
I’ve heard quite a lot about Node.js in the last few months and now it seems to be getting even more traction with PayPal planning to use it for all of their consumer-facing web applications.
Today I finished reading the Ars Technica review of Mac OS X 10.9 Mavericks. It’s an extensive review, but it was quite an interesting read. I especially liked the technical sections on the tagging implementation, which show the crazy things going on underneath the covers. The energy-saving features like App Nap and memory compression were also quite interesting, and we’ll soon find out whether it was a good idea to show users which applications are battery hogs so visibly in the UI.

The changes regarding Safari, iBooks, and Maps were mostly not that interesting to me, but who knows which features might actually turn out to be useful in the long run. The multi-monitor support sounds quite useful, but that hasn’t been much of an issue for me personally. I’ll not use the iCloud Keychain, though not because it isn’t useful; I just don’t like saving passwords in my browser anyway.

The price of $0 makes this quite an affordable upgrade, but me still being a VMware Fusion 4 user probably means I’ll have to invest in an upgrade to VMware Fusion 6 as the price of admission. In any case this upgrade looks like a no-brainer as soon as 10.9.1 is out.
It took much longer than it should have, but the wsadminlib team announced that wsadminlib is now available on GitHub.
Let’s all fork and improve it on GitHub so we can finally stop reinventing the wheel. When I get a chance I’ll definitely move my wsadmin-scripts over into my own wsadminlib fork.
Update 10/27: Removed link to my old wsadmin-scripts repository as my scripts have now been merged into wsadminlib. This is a great day!
The New York Times published a story earlier this month about how the iPhone came to be and what obstacles had to be overcome.
This is not only an interesting story for anyone working with technology, but it also shows that not everything at Apple always “just works”, at least not for their own engineers :-)
It looks like Steve Jobs was well aware of how buggy the first prototypes he was demoing were, and it turns out Apple is still working with the same technology as everyone else. But we already knew that.
It’s just good that their superior products are based on the hard work of engineers and not on fairy dust sprinkled randomly over their iProducts.