I recently upgraded to Fedora 20 and quickly found my offlineimap instance failing, throwing all kinds of errors about the certificate not being authenticated. Concerned wasn't really the word I'd use to describe my feelings on the subject. It turns out that the version of offlineimap in Fedora 20 requires either certificate fingerprint validation or CA validation when ssl = yes is set in the configuration file (.offlineimaprc). I was able to remedy the situation by adding sslcacertfile = /etc/ssl/certs/ca-bundle.crt to the config file.
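For reference, here's a minimal sketch of the relevant remote-repository section of an .offlineimaprc (the repository name and hostname are placeholders; the last two lines are what matter for this fix):

```
[Repository Remote]
type = IMAP
remotehost = mail.example.com
ssl = yes
sslcacertfile = /etc/ssl/certs/ca-bundle.crt
```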
I won't speculate as to the behavior of earlier versions, but checking that the SSL certificate is valid is quite important: without it you're open to man-in-the-middle (MITM) attacks. If you run across a similar problem, just follow the instructions above and all should, once again, be right with the world.
I've been arguing with my web hosting company about their use of RC4. Like many enterprise networks, they aren't consistent across all their servers with respect to the ciphers they make available. It appears that all customer servers support TLS_RSA_WITH_CAMELLIA_256_CBC_SHA and TLS_RSA_WITH_CAMELLIA_128_CBC_SHA in addition to TLS_RSA_WITH_RC4_128_SHA (although the RC4 suite is preferred over the other two), but their backend control web servers only support RC4. This is a problem: if you manage your crypto keys and other security settings over a weakly encrypted path in order to better secure your web service, you have essentially failed before you started, because an attacker who breaks the weak link gets everything.
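If you want a quick local sanity check of what a given OpenSSL build will offer once RC4 is excluded, you can ask openssl to expand a cipher string (this is just a sketch; the cipher string is illustrative):

```shell
# Expand a cipher string: strong suites only, RC4 explicitly excluded.
# Each output line shows suite name, protocol, key exchange,
# authentication, encryption, and MAC.
openssl ciphers -v 'HIGH:!RC4'
```

To check what a remote server will actually accept, `openssl s_client -connect host:443 -cipher RC4` is the usual probe: if the handshake succeeds, the server is willing to speak RC4.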
So what’s wrong with RC4?
It's been known for a while (years!) that RC4 is not a good encryption cipher. It's broken, and several practical attacks against it are available. So why is it used so frequently? In a word: BEAST. BEAST exploits CBC-mode block ciphers in TLS 1.0 and earlier, and RC4, being a stream cipher, is immune to it, so RC4 became the standard for TLS connections. It's not clear which attack vector is worse: BEAST or the weaknesses of RC4.
In recent months most web browsers have implemented the 1/n-1 record-splitting workaround that fixes the BEAST vulnerability. With the fix in place it should, once again, be safe to use block ciphers and thus get better encryption (better protection). Many people and organizations have been talking about the need to get rid of RC4 now that it is the bigger threat to web security. Yesterday Microsoft released a security bulletin discussing the problem and urged all developers to stop using RC4. (Oh yeah, and they also want everyone to stop using SHA-1.) I usually think of Microsoft as trailing the security field (let's face it, their products haven't been known for being secure ever since that whole network thing happened), so when even they say this mess with RC4 must stop, it's clearly gotten to the point where we should have acted already.
So what are we waiting for?
I think, simply, we're waiting for TLSv1.1 and TLSv1.2 to become mainstream. It's not as if these protocols just popped up on our radar screens (they've been out since April 2006 and August 2008, respectively), but adoption of the two newer flavors of TLS has been slow. According to Microsoft, their products are ready for TLSv1.1 and TLSv1.2 (both IIS and IE 11+). Firefox supports up to TLSv1.2 as of 25.0, but you have to turn it on manually (it's there for testing), and OpenSSL (used by Apache) should support TLSv1.2 in its 1.0.1e release. It's time to start pushing these better encryption mechanisms into operation… now.
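On the Apache side, the change is mostly configuration. A minimal mod_ssl sketch (the directive values are illustrative, not a canonical recommendation) that disables the old SSL protocols and drops RC4 might look like this:

```
# Allow everything except the legacy SSL protocols; with OpenSSL
# 1.0.1+ this enables TLSv1.0 through TLSv1.2.
SSLProtocol all -SSLv2 -SSLv3
# Prefer strong suites and refuse RC4 and anonymous/null ciphers.
SSLCipherSuite HIGH:!RC4:!aNULL:!eNULL
SSLHonorCipherOrder on
```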
Thought I'd pass along this research study, The keys to the kingdom, as I found it to be quite interesting (especially since the researchers scanned the entire Internet for their data). If you don't understand the math at the beginning, just keep reading: you don't need a degree in math or science to understand what's going on.
This morning I was greeted with a blog post from the fine folks over at Qualys on how BEAST isn't really still a threat (unless you are using an Apple product). BEAST, a vulnerability in SSL and TLS 1.0, was discovered around this time a couple of years ago and put web users in the precarious position of either using a poor cipher choice (RC4) or being vulnerable. Not to worry, however, as developers were able to come up with a solution to the problem: the 1/n-1 record split.
So I mentioned the Qualys article in my $dayjob IRC channel, where my always-awake coworker pointed out that Fedora is, in fact, still vulnerable to the attack. Thanks to a problem with pidgin-sipe connecting to a Microsoft server, the 1/n-1 split was backed out of the NSS library, leaving anything that depends on NSS potentially vulnerable (Chrome, Firefox, and Thunderbird, to name a few).
There is a fix, although it's not fantastic by any stretch of the imagination. Simply adding two lines to your /usr/bin/firefox file should mitigate the vulnerability:
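The exact lines didn't survive in this copy of the post, but based on the NSS environment variable named in the maintainer's ticket comment quoted further down (NSS_SSL_CBC_RANDOM_IV), the addition amounts to setting and exporting that variable in the wrapper script:

```shell
# Re-enable the 1/n-1 BEAST countermeasure in NSS for this process
NSS_SSL_CBC_RANDOM_IV=1
export NSS_SSL_CBC_RANDOM_IV
```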
We added these two lines at line 36 and restarted Firefox. My way-too-awake coworker did a test and confirmed that it was working in his environment. Your mileage may vary.
Hopefully the fix for BEAST can be reapplied to NSS in Fedora soon as leaving users exposed can be dangerous.
Thanks to Hubert Kario for pointing me at, and walking me through, this stuff before my morning coffee.
Update: 2013-09-12 @ 14:30 UTC
Apparently this problem will persist for a while, according to the NSS package maintainer. From the ticket:
A bit of information from the nss side of things. The nss disabling patch is not applied on Rawhide or f20, only on the stable branches. After we branch Rawhide for the next Fedora release and we enter Alpha, I send emails to the fedora development mailing list telling them that NSS_SSL_CBC_RANDOM_IV=1 will be the default as they use updates-testing and ask for feedback on whether it causes problems. Twice they have said it still causes problems. There are still unpatched servers out there. Once we go Beta I have to enable the patch again. f20 is entering Alpha soon so I'll send that email again. I know this bug is for Firefox but I thought it worth informing you that we monitor this every six months for nss.
Update: 2013-10-10 @ 15:22 UTC
Update: 2013-10-17 @ 10:32 UTC
I believe this problem has been fixed (finally!) for Fedora 19 and beyond.
Since upgrading to Fedora 19 I've been working out the kinks. Today I was finally able to run one of my problems down and fix it: my MTA was failing to deliver mail due to a TLS failure.
This failure affected both postfix and ssmtp. After much log searching I was able to determine that ssmtp wasn't verifying the public certificate of the remote SMTP server against the CA certificates I have on my system. I was able to confirm that the problem existed on other Fedora 19 systems and that it wasn't just my crazy setup. After working with a couple of developers, it appears that the ssmtp configuration file now requires the entry TLS_CA_File=/etc/pki/tls/certs/ca-bundle.crt to function correctly. It is not currently known what change introduced this problem.
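In practice that means /etc/ssmtp/ssmtp.conf ends up looking something like the excerpt below (the mailhub value is a placeholder; the TLS_CA_File line is the fix):

```
# /etc/ssmtp/ssmtp.conf (excerpt)
mailhub=smtp.example.com:587
UseSTARTTLS=YES
TLS_CA_File=/etc/pki/tls/certs/ca-bundle.crt
```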
I have not yet troubleshot postfix, but I suspect a similar fix will be needed.
I went to sign an outgoing message tonight and my Gemalto USB Shell Token wouldn’t light up when I plugged it into my USB port. After doing the typical troubleshooting I am left with the thought that the device has given up the ghost. This isn’t a big problem because I saved the card the SIM came out of and I’m able to use my token on my personal laptop with a traditional card reader. My work computer, however, does not have one of these fancy card readers (maybe I can find my USB one somewhere?).
I can buy another Gemalto device, but I'm wondering whether there is a better device to use. I mean, the Gemalto lasted through almost 500 signatures, but I guess I can now take advantage of the failure to try something else. Does anyone have any suggestions?