POSM, OSM without the Internet

2016-08-09 1 comment

Disclaimer: I am in no way affiliated with the POSM or its development.  I’m just an OSM contributor who thought this was neat and wanted to share the love.

For a while I’ve been envisioning some sort of system that would allow map data to be collected over a large area and then committed and later shared without an Internet connection.  Going into a rural area with little or no existing Internet connectivity would surely make it difficult to use the usual tools for compiling and rendering OpenStreetMap (OSM) data.  I had come up with a few solutions, none of them unique, and they seem to have been tried before.

Sneakernet

Yep, just toss your GPS tracks, pictures, and JOSM output onto a USB thumb drive and walk/drive it over to a centralized location, where Internet connectivity is available, for processing.  Sure, it might take a while to collect all the information and a while longer to redistribute it to the people in the field, but it works.

Intranet

Okay, being a network geek, this is my favorite solution: build your own network!  For the record, I’m not talking about stringing wire from village to village like soldiers did around Europe in WWII.  No, I’m talking about building wireless MANs to connect wired/wireless LANs that may already exist in these villages (or we can build our own!).

Adding our own infrastructure (email, web, and other servers) to the network would provide basic communications between villages with a potential connection to the Internet from a faraway town.

But this is far from fun for a software geek (I’m not one of those).  Enter the POSM.

POSM

The Portable OpenStreetMap, or POSM, device is a small server that hosts all the tools needed to compile, edit, and publish collected mapping data without Internet connectivity.  The project was discussed at State of the Map US (2016) and the video is a must-watch.

Of course, a POSM could be added to either a sneakernet or an intranet to collect distributed data faster, but the POSM alone seems to make working with this data much easier in the field.

Back to my thoughts

Honestly, my first thought around making a box like this, even before I heard about POSM, was the syncing of data back to the master OSM database.  If you watched the video to the end, it appears someone else in the crowd had the same concern.  The answer was the use of git to manage conflicts.  To me this is very smart, as git was made for this type of use case: distributed data that needs to be compiled together at a core location.

I do wonder how well POSM would work if you had one in each village, with MAN connections between them and the POSMs syncing among themselves, sharing the data in near-real time.  There would then be a backup of the data in the event one of the POSM devices died, adding some redundancy.  Providing connectivity could also aid communications between sites through IRC or XMPP.
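I haven’t dug into how POSM actually wires git into its sync workflow, so treat the following as a conceptual sketch only (the remote name, hostname, path, and branch are all my own assumptions), but it shows the sort of fetch-and-merge flow that would let one village’s POSM pull in another’s edits and resolve conflicts locally before anything heads back to the master OSM database:

# On the POSM in village A, add village B's POSM as a git remote (hostname/path are hypothetical)
git remote add village-b ssh://posm-b.lan/srv/posm/osm-data.git

# Pull in village B's edits and merge them; any conflicts surface here,
# where they can be resolved before the combined data is pushed upstream
git fetch village-b
git merge village-b/master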

Lots of ideas…  Lots of options…

Security Team Post-FAD Notes

2016-03-16 Leave a comment

On 11 March, some of the Security Team met in Washington, D.C. for a day-long Fedora Activity Day (FAD) where we discussed several issues.  Zach Oglesby released his notes from the meeting and I’ll be using those to describe my takeaways.

Security Updates

Right now Fedora has a couple of problems with getting security fixes onto people’s systems quickly.  The first has to do with embargoes.  Because Fedora isn’t part of the trusted network, we don’t get advance notification of vulnerabilities before their embargo is lifted.  This means that when we are notified of an urgent security vulnerability the public also has the information, and we’re left scrambling to find the fix (patch, new version, etc.) and ship it.  Some other flavors of Linux will have had advance notice and will have these patches or new versions packaged up and ready to ship when the embargo is lifted.

The other problem deals with the Fedora mirror network.  Because there are many mirrors, it could take hours, or possibly days, to get a security update out to all Fedora users.  This has to do with the mechanisms involved in keeping the mirror system efficient, not necessarily fast.  We discussed potential solutions for this problem as well.

Working with embargoes

Our solution to working with embargoes is to create a trusted team, with the appropriate tools, to deal with these issues so we can get a head start on urgent security vulnerabilities.

Right now the infrastructure isn’t in place to handle embargoed vulnerabilities.  Red Hat’s Bugzilla instance is already designed to hold embargoed information, which is good and we definitely want to leverage that resource, but we need our build system, and perhaps even Bodhi, to be able to support private builds.  The idea is to have the packages built and ready to ship before the embargo expires.  When the embargo is lifted we would just need to push the big red button and the packages would ship.

Ideally the fix would also have gone through QA before the end of the embargo period, so that we’re fairly sure we’re not breaking anything (other than the vulnerability) in the process.  For this part, we’re going to need trusted individuals to work on these issues.

Oh, and make no mistake, we aren’t trying to hide information forever.  To maintain the transparency Fedora has been built on, all of the tools we use to handle embargoed information should be able to make that information public at the end of the embargo.  Bugzilla tickets would be opened up to the public and builds should be made available as well.

Trust

Trust is a difficult thing to define.  How do you establish trust, and how do you penalize a break in that trust?  Simply put, we need a way to do both before we start handling sensitive information.  We might be able to show that we have the systems and procedures for handling this information, but the first time there’s a leak that comes from inside Fedora, it’s quite likely we will lose access to this information and be back to where we are today.

We will be working with legal and Fedora management (FPL, FESCo, and Release Engineering) to devise a plan and determine the best way to involve package maintainers, proven packagers, and QA.

Faster access to Security Fixes

Waiting hours or days for urgent security fixes to become available on mirrors really isn’t acceptable.  Recent critical vulnerabilities have seen exploits in the wild shortly after being made public.  We need to be able to get fixes out to users faster.

Debian uses separate servers to deliver security fixes until those fixes have propagated out to their normal mirrors.  This could work, especially if diff packages, which are usually much smaller than the full package, are used, but it would increase the infrastructure needed to disseminate packages.
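As a small aside on the “diff packages” point: Fedora already ships delta RPMs, and on the client side they’re a one-line opt-in.  The exact knob depends on the package manager in use (the values below are from memory; check the dnf.conf/yum.conf man pages):

# /etc/dnf/dnf.conf (Fedora 22 and later)
deltarpm=true

# /etc/yum.conf (older Fedora and EPEL-consuming hosts; 0 disables delta RPMs)
deltarpm=1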

There may also be a way to tweak our existing infrastructure to improve the time it takes to push these urgent packages out.  Either way, the Security Team will be working with Release Engineering to help figure out a solution.

Training

Another big topic we covered is training.  Many people turn to the Security Team to learn.  From a team point of view, we want to make sure our members have a common base of knowledge from which to work.  We decided to launch Apprentice and Journey level certifications that will provide this base of knowledge.

Apprenticeship

We worked through what we would want to see in a new member and created the Security Team Apprenticeship.  While not fully complete, we hope to have it ready in the coming months.

We also talked about mentors and what it means to be a mentor.

We didn’t put down many hard-and-fast rules, and that’s okay.  We want people to be active and participate.  Hopefully the mentors won’t have a heavy lift, and neither will the mentees.

Obligatory GPG Key Signing

Of course, a security FAD can’t end without the obligatory GPG key signing.  This was completed with two new members of the team.

The End

So what I’ve documented represents a day of work.  Thanks to all who participated, even if you only had a chance to pop in virtually and lend your opinions or support.  Hopefully we can do this again in a few months and work on new tasks.

Fedora Security Team FAD 2016

2016-03-02 Leave a comment

In a couple of weeks (March 11th) the Fedora Security Team will be meeting in Washington, D.C. to hack on training, security fixes, and other issues.  All Fedora contributors are welcome to stop by if you’re in the area.

All the information is available on the Security Team FAD 2016 wiki page.  Please go there and RSVP!

Securing email to Gmail

2015-12-22 3 comments

I’ve been working on securing my Postfix configuration to enforce certificate validation and encryption on some known, higher-volume, or more sensitive connections between SMTP servers (port 25).

On many of the connections I’ve set up for secure transport there have been no problems (assuming proper TLS certificates are used).  Unfortunately, Gmail™ has been a problem.  Sometimes it verifies and validates the certificate and other times it doesn’t… for days.

After conferring with Google Security I believe I’ve come up with a solution.  In my tls_policy file I’ve added the following:

gmail.com       secure match=.google.com:google.com ciphers=high protocols=TLSv1.2

So far this is working but I’ll continue to test.
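For anyone following along at home, here’s roughly how that policy file gets wired into Postfix.  The paths are my assumptions based on a typical CentOS/RHEL layout, so adjust to taste:

# /etc/postfix/main.cf
smtp_tls_policy_maps = hash:/etc/postfix/tls_policy
smtp_tls_CAfile = /etc/pki/tls/certs/ca-bundle.crt

# Rebuild the lookup table and reload Postfix after editing the policy file
postmap /etc/postfix/tls_policy
postfix reload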

If you run your own SMTP server and wish to maintain a secure connection with Gmail, this is an easy way to enforce encryption as well as validate the certificate.  Of course, this doesn’t protect the message while it’s being stored on the server or workstation (or on Google’s internal network).  To protect messages at rest (on a server) one should use GPG or S/MIME.  Using TLS between servers together with GPG or S/MIME provides protection for messages as they cross the Internet.

Update

This configuration is applicable to the OpenSSL version shipped with CentOS 6/RHEL 6.  Implementing this on CentOS 7/RHEL 7 or another flavor of Linux may require a different/better configuration.
The policy has been updated for CentOS 7/RHEL 7, which supports TLSv1.2 on Postfix.  Other services can also be set up similarly:

google.com    secure ciphers=high protocols=TLSv1.2
comcast.net    secure ciphers=high protocols=TLSv1.2
verizon.net    secure ciphers=high protocols=TLSv1.2
hotmail.com    secure ciphers=high protocols=TLSv1.2
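If you want to see what a destination’s MX is actually presenting before adding it to the policy file, something like this works.  The gmail-smtp-in.l.google.com host below is simply the MX I get back for gmail.com; look up the MX for whatever domain you’re testing:

dig +short mx gmail.com
openssl s_client -starttls smtp -connect gmail-smtp-in.l.google.com:25 -CAfile /etc/pki/tls/certs/ca-bundle.crt < /dev/null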

RFC: Using video conferencing for GPG key signing events

2015-09-24 3 comments

A thought that I haven’t had a chance to fully consider (so I’m asking the Internet to do that for me)…

I have a geographically diverse team that uses GPG to ensure the integrity of their messages.  Usually, a team like this would all huddle together and do a formal key-signing event.  With several large bodies of water separating many of the team members, however, it’s unlikely that we could even make that work.

The alternative I thought of was using a video chat meeting to facilitate the face-to-face gathering and exchange of information.  There are obviously some risks here, but I wonder if those risks are suitably mitigated through the use of authenticated/encrypted links to the video chat system.  Can anyone point to why this would be a bad idea?
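For what it’s worth, the mechanics over video would look much like they do in person: each participant reads their full key fingerprint aloud on camera (and, ideally, holds up some government-issued ID), and everyone else signs afterward.  A rough sketch, using a made-up key ID:

gpg --fingerprint 0x1234ABCD    # read this aloud / hold it up on camera
gpg --recv-keys 0x1234ABCD      # the others fetch the key from a keyserver
gpg --sign-key 0x1234ABCD       # ...and sign once the fingerprint and ID check out
gpg --send-keys 0x1234ABCD      # push the signature back to the keyserver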


Encryption you don’t control is not a security feature

2015-09-23 Leave a comment

Catching up on my blog reading this morning led me to an article discussing Apple’s iMessage program and, specifically, the encryption it uses and how it’s implemented.  Go ahead and read the article; I’ll wait.

The TL;DR of that article is this: encryption you don’t control is not a security feature.  It’s great that Apple implemented encryption in their messaging software, but since the user has no control over the implementation or the keys (especially the key distribution, management, and trust), users shouldn’t expect this type of encryption system to actually protect them.

For Apple, it’s all about UI and making it easy for the user.  In reality, what they’ve done is dumb down the entire process and force users to remain ignorant of their own security.  Many users applaud these types of “just make it work and make it pretty” interfaces, but at the same time you end up with an uneducated user who doesn’t even realize that their data is at risk.  Honestly, it’s 2015… if you don’t understand information security… well, to quote my friend Larry, “when you’re dumb, you suffer”.

Yes, that’s harsh.  But it’s time for people to wake up and take responsibility for their naked pictures or email messages being publicized.  I’m assuming most everyone makes at least a little effort toward physically securing their homes (e.g. locking doors and windows).  Why should your data be any less protected?

In comparison, I’ll use Pidgin and OTR as an example of a better way to encrypt messaging systems.  OTR doesn’t rely on outside mechanisms for handling keys, it clearly displays whether a message is merely encrypted (untrusted) or tied to a key you’ve verified, and it’s simple to use.

One thing I’ll say about Apple’s iMessage is that it at least starts to fix the problem.  I’d rather have ciphertext being sent across the network than plaintext.  Users just need to understand what the risks are and evaluate whether they are okay with those risks or not.

Fedora Security Team 90-Day Challenge to clean up vulnerabilities… an update

2015-05-29 Leave a comment

At the beginning of April, the Fedora Security Team (FST) started on a journey to close all critical and important CVEs in Fedora and EPEL that originated in 2014 and before.  Now that we’re two-thirds of the way through, I figured it would be a good time to see what we’ve accomplished so far.

Of the 38 CVEs (37 important and 1 critical) we originally identified, 14 have been closed, 1 is currently in QA, and 23 remain open.  The 14 closed CVEs represent around a third of all the identified CVEs.  So, not bad but also not great; there is still work to be done.

If you want to help get some of these CVEs cleaned up, here’s a list of the target packages.  We need to make sure that upstream has fixed the problem and that the packagers are pushing these fixes into the repos.

  • ytnef
  • mediatomb
  • rubygem-httparty
  • rubygem-extlib
  • rubygem-crack
  • nagios
  • libmicrohttpd
  • directfb
  • nagios-plugins
  • dcmtk
  • sahana
  • opensaml-java
  • s3ql
  • tomcat
  • openstack-keystone
  • phpMemcachedAdmin

I hope to come back to you at the end of the month with a report on how all of the CVEs were fixed and who helped fix them!
