
Archive for the ‘Security’ Category

256 Bits of Security

2014-04-17 1 comment

This is an incomplete discussion of SSL/TLS authentication and encryption.  This post only covers RSA and does not discuss DHE, PFS, elliptic-curve cryptography, or other mechanisms.

In a previous post I created a 15,360-bit RSA key and timed how long it took to create it.  Some may have thought that was some sort of stunt to check processor speed.  I mean, who needs an RSA key of such strength?  Well, it turns out that if you need 256 bits of security then you really do need an RSA key of this size.

According to NIST (SP 800-57, Part 1, Rev. 3), to achieve 256 bits of security you need an RSA key of at least 15,360 bits to protect the 256-bit symmetric cipher that’s being used to secure the communications (SSL/TLS).  So what does the new industry-standard RSA key size of 2048 bits buy you?  According to the same document, a 2048-bit key buys you 112 bits of security.  Increasing the key size to 3072 bits will bring you up to the 128 bits that most people expect to be the minimum protection.  And this assumes that the certificate and the certificate chain are all signed using a SHA-2 algorithm (SHA-1 only gets you 80 bits of security when used for digital signatures and hashes).
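
As a rough illustration (stock OpenSSL commands; the output file names are just placeholders), you can generate keys at each of those strength levels and watch the cost climb:

# ~112 bits of security (the current industry standard)
time openssl genrsa -out rsa2048.pem 2048
# ~128 bits of security
time openssl genrsa -out rsa3072.pem 3072
# ~256 bits of security -- expect this one to take a very long time
time openssl genrsa -out rsa15360.pem 15360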

So what does this mean for those websites running AES-256 or CAMELLIA-256 ciphers?  They are likely wasting processor cycles without adding to the overall security of the connection.  I’ll give two examples of TLS implementations in the wild.

First, we’ll look at wordpress.com.  This website is protected using a 2048-bit RSA certificate, signed using SHA-256, and an AES-128 cipher.  This represents 112 bits of security because of the limitation of the 2048-bit key.  The certificate properly chains back to the GoDaddy CA, whose root and intermediate certificates are all 2048 bits and signed using SHA-256.  Even though the 2048-bit key reduces the overall security, it’s likely more efficient to use the AES-128 cipher than any other due to the hardware AES acceleration typically found in computers nowadays.

Next we’ll look at one of my domains: christensenplace.us.  This website is protected using a 2048-bit RSA certificate, signed using SHA-1, and a CAMELLIA-256 cipher.  This represents 80 bits of security due to the limitation of the SHA-1 signature used on the certificate and on the root and intermediate certificates from AddTrust and COMODO CA.  My hosting company offers both the RC4 cipher and the CAMELLIA-256 cipher.  In this case the CAMELLIA-256 cipher is a waste of processor time since the certificates used aren’t nearly strong enough to support such encryption.  I block RC4 in my browser as RC4 is no longer recommended to protect anything.  I’m not really sure exactly how much security you get from using RC4, but I suspect it’s less than SHA-1 provides.
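
If you want to check a site yourself, something along these lines works with a stock OpenSSL install (example.com is, of course, a placeholder for the host you care about):

# show the negotiated protocol and cipher
openssl s_client -connect example.com:443 < /dev/null 2>/dev/null | grep -E 'Protocol|Cipher'
# show the server key size and the signature algorithm on the certificate
openssl s_client -connect example.com:443 < /dev/null 2>/dev/null | \
  openssl x509 -noout -text | grep -E 'Public.Key|Signature Algorithm'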

So what to do?  Well, if system administrators are concerned with performance then using a 128-bit cipher (like AES-128) is a good idea.  For those concerned with security, using a 3072-bit RSA key (at a minimum) will give you 128 bits of security.  If you feel you need more than 128 bits of security, then generating a solid, large RSA key is the first step.  Deciding how many bits of security you need depends on how long you want the information to stay secure.  But that’s a post for another day.
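
For anyone taking that advice, a minimal sketch with stock OpenSSL (file names and subject are placeholders) looks like this:

# generate a 3072-bit RSA key (roughly 128 bits of security)
openssl genrsa -out example.key 3072
# create a certificate signing request hashed with SHA-256 rather than SHA-1
openssl req -new -sha256 -key example.key -out example.csr -subj "/CN=www.example.com"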

caff gpg.conf file settings

2014-04-01 1 comment

After years of using caff for my PGP key-signing needs I finally came across the answer to a question I’ve had since the beginning.  I’m documenting it here so that I may keep my sanity the next time I go searching for the information.

My question was “how do you issue a specific level of certification in a signature?”.  As defined in RFC 1991, section 6.2.1, the four types of certifications are:

     <10> - public key packet and user ID packet, generic certification
          ("I think this key was created by this user, but I won't say
          how sure I am")
     <11> - public key packet and user ID packet, persona certification
          ("This key was created by someone who has told me that he is
          this user") (#)
     <12> - public key packet and user ID packet, casual certification
          ("This key was created by someone who I believe, after casual
          verification, to be this user")  (#)
     <13> - public key packet and user ID packet, positive certification
          ("This key was created by someone who I believe, after
          heavy-duty identification such as picture ID, to be this
          user")  (#)

Generally speaking, the default settings in caff only provide the first-level “generic” certification.  Tonight I found information specific to ~/.caff/gnupghome/gpg.conf.  As far as I know, this file can contain three lines:

personal-digest-preferences SHA256
cert-digest-algo SHA256
default-cert-level 2

I can’t find any official information on this file as the man pages are a little slim on details.  That said, if you use caff you should definitely create this file and populate it with the above at a minimum, with the exception of the default-cert-level.  The default-cert-level should be whatever you feel comfortable setting it to.  My default is “2” for key signing parties (after I’ve inspected an “official” identification card and/or passport).  The other two settings are important as they ensure a decent SHA-2 hash is used instead of the default of SHA-1.
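
Once you’ve signed something with these settings, you can check that the certification level actually stuck (the key ID below is a placeholder):

# the digit after "sig" is the certification class:
# blank = generic (0x10), 1 = persona, 2 = casual, 3 = positive
gpg --list-sigs 0xDEADBEEF | grep '^sig'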

Configuring offlineimap to validate SSL/TLS certificates

2014-01-30 Leave a comment

I recently upgraded to Fedora 20 and quickly found my offlineimap instance failing.  I was getting all kinds of errors about the certificate not being authenticated.  “Concerned” wasn’t really the word I’d use to describe my feelings on the subject.  It turns out that the version of offlineimap in Fedora 20 (I won’t speculate as to earlier versions) requires either a certificate fingerprint or a CA validation if ssl = yes is in the configuration file (.offlineimaprc).  I was able to remedy the situation by putting sslcacertfile = /etc/ssl/certs/ca-bundle.crt in the config file.
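
For reference, a minimal sketch of the relevant repository section (repository name, host, and user are placeholders; the sslcacertfile line is the actual fix):

[Repository RemoteExample]
type = IMAP
remotehost = mail.example.com
remoteuser = user@example.com
ssl = yes
# point offlineimap at the system CA bundle so the server certificate can be verified
sslcacertfile = /etc/ssl/certs/ca-bundle.crt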

I won’t speculate as to the functionality in earlier versions, but checking that the SSL certificate is valid is quite important (it’s what protects you from MITM attacks).  If you run across a similar problem, just follow the instructions above and all should, once again, be right with the world.

Categories: Fedora 20, Integrity, Security

Kicking RC4 out the door

2013-11-13 2 comments

I’ve been arguing with my web hosting company about their use of RC4.  Like many enterprise networks they aren’t consistent across all their servers with respect to available ciphers and such.  It appears that all customer servers support TLS_RSA_WITH_CAMELLIA_256_CBC_SHA and TLS_RSA_WITH_CAMELLIA_128_CBC_SHA in addition to TLS_RSA_WITH_RC4_128_SHA (although the latter is preferred over the other two), but their backend management web servers only support RC4.  This is a problem: if you are handling cryptographic keys and other security settings over a weakly encrypted path in order to better secure your web service, you have essentially failed before you started.
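
You can check what a given server will actually negotiate using OpenSSL’s s_client (the host name is an example; a completed handshake means the cipher was accepted):

# does the server still accept RC4?
openssl s_client -connect www.example.com:443 -cipher RC4 < /dev/null
# will it negotiate TLS_RSA_WITH_CAMELLIA_256_CBC_SHA instead?
openssl s_client -connect www.example.com:443 -cipher CAMELLIA256-SHA < /dev/null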

So what’s wrong with RC4?

It’s been known for a while (years!) that RC4 is not a good encryption cipher.  It’s broken, and there are several practical attacks against it.  So why is it being used so frequently?  In a word: BEAST.  RC4 was the only widely supported stream cipher in TLS, and since BEAST only affects CBC-mode block ciphers, RC4 became the standard choice for TLS connections.  It’s not clear which attack vector is worse: BEAST or the weaknesses in RC4.

In recent months most Internet browsers have implemented the 1/n-1 record-splitting workaround to fix the BEAST vulnerability.  With the fix in place it should, once again, be safe to use block ciphers and thus get better encryption (better protection).  There have been many people and organizations talking about the need to get rid of RC4 now, since it is the bigger threat to web security.  Yesterday Microsoft released a security bulletin discussing the problem and urged all developers to stop using RC4.  (Oh yeah, and they also want everyone to stop using SHA-1.)  I usually think of Microsoft as trailing in the security field (let’s face it, their products haven’t been known for being secure ever since that whole networking thing happened), so when even they say the RC4 mess must stop, we’ve reached a point where we should have already done so.

So what are we waiting for?

I think, simply, we’re waiting for TLSv1.1 and TLSv1.2 to become mainstream.  It’s not as if these technologies have just popped up on our radar screens (they’ve been out since April 2006 and August 2008, respectively), but adoption of the two newer flavors of TLS has been slow.  According to Microsoft, their products are ready for TLSv1.1 and TLSv1.2 (both IIS and IE 11+).  Firefox supports up to TLSv1.2 in 25.0, but you have to turn it on manually (it’s there for testing), and OpenSSL (used by Apache) supports TLSv1.2 as of its 1.0.1 release.  It’s time to start pushing these better encryption mechanisms into operation… now.
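
For Apache, assuming a reasonably recent httpd with mod_ssl built against OpenSSL 1.0.1 or later, a configuration along these lines (a sketch, not a drop-in) enables the newer protocol versions and refuses RC4:

# allow everything the library supports except the old SSL protocols
SSLProtocol all -SSLv2 -SSLv3
# prefer strong ciphers and refuse RC4 outright
SSLCipherSuite HIGH:!aNULL:!MD5:!RC4
SSLHonorCipherOrder on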

Updates

What Google has to say on the subject.

How secure are those SSL and SSH keys anyway?

2013-10-30 Leave a comment

Thought I’d pass along this research study, The keys to the kingdom, as I found it to be quite interesting (especially when the data comes from scanning the entire Internet).  If you don’t understand the math explanation at the beginning, just continue reading; you don’t need a degree in math or science to understand what’s going on.

Why Android SSL was downgraded from AES256-SHA to RC4-MD5 in late 2010

2013-10-14 1 comment

Just ran across this article discussing how horrible the cipher preference list is in Android.  That’s a lot of bad crypto on the streets right now.

Why Android SSL was downgraded from AES256-SHA to RC4-MD5 in late 2010


Trusting Trusted CAs

2013-10-09 Leave a comment

Like it or not, the basis of trust for much of the Internet is Certificate Authorities (CAs).  Companies like Verisign, GoDaddy, and GeoTrust are in the trust business.  They will sell you cryptographic proof of your Internet assets (namely your domain name) that others can use to verify that when they visit your website they are actually visiting your website and not some lookalike.  This is important: you don’t want to give the login credentials for your bank account to a lookalike web page that really isn’t your bank.

The trouble is, how do you know the CAs are doing their due diligence and not just issuing certificates to anyone who claims to own a particular domain name?  Well, I’m not sure we, as users, do know.  Mozilla, like other browser vendors, has a policy for including CAs in their browser, but a quick look at the list of CAs that are already in Firefox shows that we as users can’t realistically verify them all.
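
To put a rough number on it: Firefox ships its own root list via NSS, but counting the certificates in Fedora’s system-wide bundle gives a sense of the scale (the path is the Fedora default):

# count the trusted root certificates in the system CA bundle
grep -c 'BEGIN CERTIFICATE' /etc/pki/tls/certs/ca-bundle.crt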

If I were a conspiracy theorist I would be looking real hard at what the Electronic Frontier Foundation (EFF) recently released about the NSA spying program.  According to their research (and that of the Guardian and others), the NSA is actively performing man-in-the-middle (MITM) attacks to get malware onto computers.  This malware allows the NSA (and anyone else capable of accessing these infected computers) to circumvent protections put in place to keep information passed over the Internet secure.  To pull off these MITM attacks against a site that is supposed to be secured, one would need to present users with a valid SSL certificate.  The only ways of doing this are to obtain the SSL certificates from the real sites or to create new ones and have them signed by a trusted CA.  With that in mind, I wonder which option is more probable?

It’s good to note that these types of attacks are not solely done by the NSA.  Gaining access to computers is a very profitable business and one that actors other than governments engage in.  It’s important to protect yourself against these attacks and be smart when surfing the Internet.  The end of the EFF story contains information on how to protect your computer (and yourself) and is a good read for everyone.

Reflections on Trusting Trust

2013-09-18 1 comment

Reflections on Trusting Trust

This is an old paper written back in 1984 by Ken Thompson.  Mr. Thompson describes why it is so difficult to trust software even when you have access to the source code.  We are now reading daily about how the NSA has access to our network communications and even our computers.  If they have access you can believe that others, completely unrelated to the NSA, have access as well through many of the same software bugs or network connections.  It will be difficult to figure out how to get past these problems.  Fortunately we do have smart people thinking about these things daily.

Categories: Fedora Project, Integrity, Security

Fedora still vulnerable to the BEAST

2013-09-12 5 comments

This morning I was greeted with a blog post from the fine folks over at Qualys on how BEAST isn’t really still a threat (unless you are using an Apple product).  BEAST, a vulnerability in SSL and TLS 1.0, was discovered around this time a couple of years ago and put web users in the precarious position of either using a poor cipher (RC4) or being vulnerable.  Not to worry, however, as developers were able to come up with a solution to the problem (the 1/n-1 record split).

So I mentioned the Qualys article in my $dayjob IRC channel, where my always-awake coworker pointed out that Fedora is, in fact, still vulnerable to the attack.  Thanks to a problem with pidgin-sipe connecting to a Microsoft server, the 1/n-1 split was backed out of the NSS library, leaving anything that depends on it potentially vulnerable (Chrome, Firefox, and Thunderbird, to name a few).

There is a fix, although it’s not fantastic by any stretch of the imagination.  Simply adding these two lines to your /usr/bin/firefox file should close the vulnerability:

NSS_SSL_CBC_RANDOM_IV=1
export NSS_SSL_CBC_RANDOM_IV

We added these two lines at line 36 and restarted Firefox.  My way-too-awake coworker did a test and confirmed that it was working in his environment.  Your mileage may vary.
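
If you want to confirm the variable actually made it into the browser’s environment (assuming the running process is simply named firefox), you can peek at /proc:

# print the NSS workaround variable from the running Firefox's environment
tr '\0' '\n' < /proc/$(pgrep -x firefox | head -n 1)/environ | grep NSS_SSL_CBC_RANDOM_IV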

Hopefully the fix for BEAST can be reapplied to NSS in Fedora soon as leaving users exposed can be dangerous.

Thanks to Hubert Kario for pointing me at, and walking me through, this stuff before my morning coffee.

Update: 2013-09-12 @ 14:30 UTC

Apparently this problem will be persistent according to the NSS package maintainer.  From the ticket:

A bit of information from the nss side of things.  The nss disabling patch is not applied on Rawhide or f20, only on the stable branches.  After we branch Rawhide for the next Fedora release and we enter Alpha, I send an email to the fedora development mailing list telling them that NSS_SSL_CBC_RANDOM_IV=1 will be the default as they use updates-testing, and ask for feedback on whether it causes problems.  Twice they have said it still causes problems.  There are still unpatched servers out there.  Once we go Beta I have to enable the patch again.  f20 is entering Alpha soon so I’ll send that email again.  I know this bug is for Firefox but I thought it worth informing you that we monitor this every six months for nss.

Update: 2013-10-10 @ 15:22 UTC

After several weeks of inaction I’ve filed a ticket with FESCo to hopefully compel a fix in NSS; any software that breaks as a result should itself be patched rather than having the NSS fix backed out again.

Update: 2013-10-17 @ 10:32 UTC

I believe this problem has been fixed (finally!) for Fedora 19 and beyond.

MTA certificate not verifying in Fedora 19

2013-09-06 Leave a comment

Since upgrading to Fedora 19 I’ve been working out the kinks.  Today I was finally able to run one of my problems down and fix it.  It involved my MTA failing to deliver mail due to a TLS failure.

This failure affected both postfix and ssmtp.  After much log searching I was able to determine that ssmtp wasn’t verifying the public certificate of the remote SMTP server against the CA certificates I have on my system.  I was able to confirm that the problem existed on other Fedora 19 systems and that it wasn’t just my crazy setup.  After working with a couple of developers, it seems the ssmtp configuration file now requires the entry “TLS_CA_File=/etc/pki/tls/certs/ca-bundle.crt” to function correctly.  It is not currently known what change introduced this problem.
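
For anyone hitting the same thing, a sketch of the relevant /etc/ssmtp/ssmtp.conf lines (the mailhub is a placeholder; TLS_CA_File is the fix described above):

mailhub=smtp.example.com:587
UseSTARTTLS=YES
# tell ssmtp where the system CA certificates live so the remote server can be verified
TLS_CA_File=/etc/pki/tls/certs/ca-bundle.crt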

I have not troubleshot postfix yet, but I suspect a similar solution will be needed.
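
If it turns out to be the same issue, the postfix equivalent would presumably be the corresponding CA settings in main.cf (untested on my side, so treat this as a guess):

# /etc/postfix/main.cf -- point outbound TLS at the system CA bundle
smtp_tls_CAfile = /etc/pki/tls/certs/ca-bundle.crt
# require a verified certificate rather than opportunistic TLS
smtp_tls_security_level = verify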
