I recently read an article on the TriLUG blog mirror discussing access to data after the death of its owner. I’ve given this a lot of thought as well and had previously come to the same conclusion as the article’s author:
“I created a file called “deathnote.txt” which I then encrypted using GPG. This will encrypt the file so that both Bob and Alice can read it (and I can too). I then sent it to several friends unrelated to them with instructions that, upon my death (but not before), please send this file to Bob and Alice.”
To be honest, I didn’t actually go through with this project as there were just too many variables that I hadn’t figured out. There is a lot of trust involved in this, and it potentially takes a very small number of people (two) to really hose things up. It’s not that I wouldn’t trust my “trusted friends” with the responsibility, but it potentially makes them targets, and two is just a really low threshold for an adversary to recover this information.
What really threw me was that the author also included a copy of his private key in case they couldn’t locate it on his computer to, I’m assuming here, access other data. I have one word for this: NOPE!
Okay, short of the private key thing, what was proposed was quite logical. Like I said above, I had a very similar idea a while back. Springboarding from that idea, I’d like to propose another layer of security into this whole process.
Splitting up the data
So you have your encrypted blob of information that goes to person A when you kick off, but you don’t want person A to have it before then. Enlist some trusted friends and you have a means of providing the information to person A upon your demise. But letting a single person, or even two people, control this information is dangerous. What if you could split that data into further encrypted parts and hand those parts out to several friends? Then no single person would hold all the information. You’d likely want some overlap so that you wouldn’t need ALL the friends to present their pieces (maybe one got lost, maybe a friend got hit by the same bus that you did, etc.), so we’d want to build in a little redundancy.
Shamir’s Secret Sharing Scheme (ssss) is a neat piece of software that takes a secret and splits it into pieces. Redundancy can be built in so that not all pieces are required to reassemble the secret (think RAID 5).
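For the curious, the math behind the ssss tool fits in a few lines. This is only an illustrative sketch of a t-of-n scheme over a prime field (the field size and function names are my own choices, not ssss internals): the secret is the constant term of a random polynomial, each friend gets one point on the curve, and any t points recover the constant term by interpolation.

```python
# Illustrative Shamir (t-of-n) secret sharing -- a sketch of the idea
# behind ssss, not a replacement for the audited tool.
import secrets

PRIME = 2**127 - 1  # a Mersenne prime large enough for a 16-byte secret


def split(secret: int, t: int, n: int):
    """Split `secret` into n shares; any t of them reconstruct it."""
    # Random polynomial of degree t-1 with the secret as constant term.
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(t - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):  # Horner evaluation mod PRIME
            y = (y * x + c) % PRIME
        shares.append((x, y))
    return shares


def combine(shares):
    """Lagrange interpolation at x=0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret
```

With a 3-of-5 split, any three friends’ shares reconstruct the secret, while any two learn nothing about it; that is the redundancy described above.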
“In cryptography, a secret sharing scheme is a method for distributing a secret amongst a group of participants, each of which is allocated a share of the secret. The secret can only be reconstructed when the shares are combined together; individual shares are of no use on their own.”
–From the SSSS website
Implementing the solution
Because ssss can only share relatively small secrets (less than 1024 bits), my “death” instructions would likely need to be stored whole as ciphertext, with the (symmetric) key being the shared secret.
The other piece of this solution is deciding who holds the shares of the key. It would likely be best if the individuals were not only trusted but also didn’t know the others involved in the share. That way there is a smaller chance that these individuals could get together to reassemble the key.
Also, if person A is the one holding the ciphertext, then even if the shareholders did find each other they would only have a key and not be able to decrypt the actual text.
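The hybrid construction can be sketched in a few lines. The SHAKE-256 keystream below is only a stand-in for a real symmetric cipher (in practice you would encrypt the note with GPG’s symmetric mode and feed the passphrase to ssss); the note text, key size, and function name are my own assumptions.

```python
# Hybrid sketch: encrypt the instructions with a random symmetric key,
# then the *key* (not the note itself) is what gets split among friends.
import hashlib
import secrets


def xor_stream(key: bytes, data: bytes) -> bytes:
    # SHAKE-256 used as a keystream generator -- a stand-in for a
    # real cipher such as AES; XOR twice with the same key round-trips.
    stream = hashlib.shake_256(key).digest(len(data))
    return bytes(a ^ b for a, b in zip(data, stream))


note = b"Bob and Alice: my instructions are as follows..."  # hypothetical
key = secrets.token_bytes(32)       # this is what ssss would split
ciphertext = xor_stream(key, note)  # person A holds only this blob

# Later: enough friends pool their shares, recover `key`,
# and person A can finally decrypt.
assert xor_stream(key, ciphertext) == note
```

Person A holding only the ciphertext, and the friends holding only key shares, means neither side alone can read anything.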
I’m quite happy that I read the original article and I hope to do the same thing that the author did before I kick the bucket. I’m quite sure that there are other ways to do what Tarus and I wrote about and actual implementation will vary depending upon the individual, their technical level, and their personal privacy requirements. This problem, though, is one that deserves to be solved as more and more of our information is kept digitally.
While working on strengthening my email server I discovered that Mozilla Thunderbird 45.4.0 only supports the following ciphers:
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
TLS_DHE_RSA_WITH_AES_128_CBC_SHA
TLS_DHE_RSA_WITH_AES_256_CBC_SHA
TLS_RSA_WITH_AES_128_CBC_SHA
TLS_RSA_WITH_AES_256_CBC_SHA
TLS_RSA_WITH_3DES_EDE_CBC_SHA
Not sure why it doesn’t just use the default from OpenSSL.
Update: As pointed out here and other locations, Thunderbird doesn’t use OpenSSL for its encryption library, rather it uses NSS. This is a good example of why one shouldn’t write articles while sick.
Disclaimer: I am in no way affiliated with the POSM or its development. I’m just an OSM contributor who thought this was neat and wanted to share the love.
For a while I’ve been envisioning some sort of system that would allow map data to be collected over a large area and then committed and later shared without an Internet connection. Going into a rural area without sufficient or existing Internet connectivity would surely be a problem when using tools for compiling and rendering OpenStreetMap (OSM) data. I had come up with a few solutions that were not unique and, it seems, had been tried before.
Sneakernet
Yep, just toss your GPS tracks, pictures, and JOSM output onto a USB thumb drive and walk/drive it over to a centralized location, where Internet connectivity is available, for processing. Sure, it might take a while to collect all the information, and a while longer to redistribute it to the people in the field, but it works.
Intranet
Okay, being a network geek this is my favorite solution: build your own network! For the record, I’m not talking about stringing wire from village to village like soldiers did around Europe in WWII. No, I’m talking about building wireless MANs to connect wired/wireless LANs that may already exist in these villages (or we can build our own!).
Adding our own infrastructure (email, web, and other servers) to the network would provide basic communications between villages with a potential connection to the Internet from a faraway town.
But this is far from fun for a software geek (I’m not one of those). Enter the POSM.
The Portable OpenStreetMap, or POSM, device is a small server that hosts all the tools needed to compile, edit, and publish collected mapping data without Internet connectivity. The project was discussed at the US State of the Map (2016) and the video is a must-watch.
Of course a POSM could be added to either a Sneakernet or Intranet to allow for distributed data to be collected faster but the POSM, alone, seems to make working with this data much easier in the field.
Back to my thoughts
Honestly, my first thoughts around making a box like this, even before I heard about POSM, was the syncing of data back to the master OSM database. If you watched the video to the end it appears someone else in the crowd had the same concern. The answer to this was the use of git to manage conflicts. To me this is very smart as git was made for this type of use-case (distributed data that needs to be compiled together at a core location).
I do wonder how well POSM would work if you had one in each village, with MAN connections between them and the POSMs syncing among themselves, sharing data in near-real time. This would add redundancy: there would be a backup of the data in the event one of the POSM devices died. Providing connectivity could also aid in communications between sites through IRC or XMPP.
Lots of ideas… Lots of options…
On 11 March, some of the Security Team met in Washington, D.C. for a day-long FAD where we discussed several issues. Zach Oglesby released his notes from the meeting and I’ll be using those to describe my take-away from the meeting.
Right now Fedora has a couple of problems with getting security fixes onto people’s systems quickly. The first has to do with embargoes. Because Fedora isn’t part of the trusted network, we don’t get advance notification of vulnerabilities before their embargo is lifted. This means that when we are notified of an urgent security vulnerability the public also has the information, and we’re left scrambling to find the fix (patch, new version, etc.) and ship it. Some other flavors of Linux will have had advance notice and will have patches or new versions packaged up and ready to ship when the embargo is lifted.
The other problem deals with the Fedora Mirror network. Because there are many mirrors it could take many hours or days(?) to get a security update out to all Fedora users. This has to do with the mechanisms involved for keeping the Mirror system efficient; not necessarily fast. We discussed potential solutions for this problem as well.
Working with embargoes
Our solution to working with embargoes is to create a trusted team, with the appropriate tools, to deal with these issues so we can get a head start on urgent security vulnerabilities.
Right now the infrastructure isn’t in place to be able to handle embargoed vulnerabilities. Red Hat’s Bugzilla instance is currently designed to hold embargoed information, which is good and we definitely want to leverage that resource, but we need our build system and perhaps even Bodhi to be able to support private builds. The idea is to have the packages built and ready to ship before the embargo expires. When the embargo is lifted we would just need to push the big red button and the packages would ship.
It would also be hoped that the fix would have gone through QA before the end of the embargoed period so that we’re fairly sure that we’re not breaking anything (other than the vulnerability) in the process. For this part, we’re going to need trusted individuals to work on these issues.
Oh, and make no mistake, we aren’t trying to hide information forever. To maintain the transparency that Fedora is built on, all of the tools we use to handle embargoed information should be able to make that information public at the end of the embargo. Bugzilla tickets would be opened up to the public and builds should be made available as well.
Trust is a difficult thing to define. How do you establish trust and how do you penalize a break in that trust? Simply put, we need a way to do both before we start handling sensitive information. We might be able to show that we have the systems and the procedures for handling this information but the first time there’s a leak and it comes from inside Fedora it’s quite likely we will lose access to this information and be back to where we are today.
We will be working with legal and Fedora management (FPL, FESCo, and Release Engineering) to devise a plan and determine the best way to involve package maintainers, proven packagers, and QA.
Faster access to Security Fixes
Waiting hours or days for urgent security fixes to become available on mirrors really isn’t acceptable. Recent critical vulnerabilities have seen exploits in the wild shortly after the vulnerability was made public. We need to be able to get fixes out to users faster.
Debian uses separate servers to deliver security fixes until those fixes have propagated out to the normal mirrors. This could work, especially with delta packages, which are usually much smaller than full packages, but it would increase the infrastructure needed to disseminate packages.
There may also be a way to tweak our existing infrastructure to improve the time it takes to push these urgent packages out. Either way, the Security Team will be working with Release Engineering to help figure out a solution.
Another big topic we covered is training. Many people turn to the Security Team to learn. From a team point of view, we want to make sure our members have a common base of knowledge from which to work. We decided to launch Apprentice and Journey level certifications that will provide this base of knowledge.
We worked through what we would want to see in a new member and created the Security Team Apprenticeship. While not fully complete, we hope to have it ready in the coming months.
We also talked about mentors and what it means to be a mentor.
We didn’t put down many hard and fast rules and that’s okay. We want people to be active and participate. Hopefully the mentors won’t have a heavy lift and neither will the mentees.
Obligatory GPG Key Signing
Of course, a security FAD can’t end without the obligatory GPG key signing. This was completed with two new members of the team.
So what I’ve documented represents a day of work. Thanks to all that participated, even if you only had a chance to pop in, virtually, and lend your opinions or support. Hopefully we can do this again in a few months and work on new tasks.
In a couple of weeks (March 11th) the Fedora Security Team will be meeting in Washington, D.C. to hack on training, security fixes, and other issues. All Fedora contributors are welcome to stop by if you’re in the area.
All the information is available on the Security Team FAD 2016 wiki page. Please go there and RSVP!
I’ve been working on securing my postfix configuration to enforce certificate validation and encryption on some known, higher-volume, or more sensitive connections between SMTP servers (port 25).
On many of the connections I’ve set up for secure transport there have been no problems (assuming proper TLS certificates are used). Unfortunately Gmail™ has been a problem. Sometimes it verifies and validates the certificate and other times it doesn’t… for days.
After conferring with Google Security I believe I’ve come up with a solution. In my tls_policy file I’ve added the following:
gmail.com secure match=.google.com:google.com ciphers=high protocols=TLSv1.2
So far this is working, but I’ll continue to test.
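For completeness, the policy file only takes effect once Postfix is told about it. A minimal main.cf excerpt might look like this (the paths are assumptions for a typical CentOS layout; adjust to your own):

```
# /etc/postfix/main.cf (excerpt)
smtp_tls_policy_maps = hash:/etc/postfix/tls_policy
smtp_tls_CAfile = /etc/pki/tls/certs/ca-bundle.crt
```

After editing the policy file, run postmap /etc/postfix/tls_policy to rebuild the lookup table and reload Postfix.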
If you run your own SMTP server and wish to maintain a secure connection with Gmail this is an easy way to enforce encryption as well as validate the certificate. Of course this doesn’t protect the message while it’s being stored on the server or workstation (or on Google’s internal network). To protect messages at rest (on a server) one should use GPG or S/MIME. Using both TLS over the network between servers and GPG or S/MIME is beneficial to provide protection of the messages going over the Internet.
This configuration is applicable with the OpenSSL version shipped with CentOS 6/RHEL 6. Implementing this on CentOS 7/RHEL7 or another flavor of Linux may require a different/better configuration.
The policy has been updated for CentOS 7/RHEL 7 which supports TLSv1.2 on Postfix. Other services can also be setup similarly:
google.com secure ciphers=high protocols=TLSv1.2
comcast.net secure ciphers=high protocols=TLSv1.2
verizon.net secure ciphers=high protocols=TLSv1.2
hotmail.com secure ciphers=high protocols=TLSv1.2
A thought that I haven’t had a chance to fully consider (so I’m asking the Internet to do that for me)…
I have a geographically-diverse team that uses GPG to provide integrity of their messages. Usually, a team like this would all huddle together and do a formal key-signing event. With several large bodies of water separating many of the team members, however, it’s unlikely that we could even make that work.
The alternative I thought of was using a video chat meeting to facilitate the face-to-face gathering and exchange of information. There are obviously some risks here, but I wonder whether those risks are suitably mitigated through the use of authenticated/encrypted links to the video chat system. Can anyone point to why this would be a bad idea?