Migrating to HTTPS

Google Maps, Geocaching, and WordPress all prefer HTTPS by default now; Google Maps has even begun reducing functionality in its API unless HTTPS is used by the consuming web site. Therefore we’ve begun migrating the entire BCaching web site over to HTTPS as well.

The mobile apps are still using the BCaching API via HTTP, so that will remain available, but all browser-based use of the web site will be switched over.
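For anyone making a similar move, here is a minimal sketch of one way an ASP.NET site can push browser traffic onto HTTPS with a permanent redirect. This is only an illustration under my own assumptions (module name included), not necessarily how the BCaching site does it, and RedirectPermanent requires .NET 4.0.

using System;
using System.Web;

namespace HttpsMigrationSketch
{
    // Hypothetical module: send any plain-HTTP browser request to the
    // HTTPS version of the same URL.
    public class HttpsRedirectModule : IHttpModule
    {
        public void Init(HttpApplication app)
        {
            app.BeginRequest += delegate(object sender, EventArgs e)
            {
                HttpContext ctx = ((HttpApplication)sender).Context;
                if (!ctx.Request.IsSecureConnection)
                {
                    // 301 so browsers and search engines remember the move.
                    ctx.Response.RedirectPermanent(
                        "https://" + ctx.Request.Url.Host + ctx.Request.RawUrl);
                }
            };
        }

        public void Dispose() { }
    }
}

A module like this would still need to be registered in web.config, and any API endpoints used by the mobile apps would have to be excluded so their HTTP access keeps working.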

We are using Let’s Encrypt for our 100% free, auto-renewing SSL certificate. See https://letsencrypt.org/ for more details.

We are currently testing functionality at https://test.bcaching.com. Feel free to use that if you’d like and let us know if you encounter any trouble.

Geocaching API is Live

In case you haven’t seen the forums, bcaching is now using the Geocaching Live! API to automatically download pocket queries from your geocaching account.

In order to activate this feature, you must authorize bcaching to access your Geocaching account. To get started, go to the bcaching profile page, then next to “Geocaching Account”, click “Authorize”. Click “Authorize” again on the geocaching.com page, and you will be redirected back to your bcaching profile page, where it should indicate that your account has been authorized.

Then, go to the upload pocket queries page. There will be a new button at the top named “Download”. Use this to download all new available pocket queries from your geocaching account. The first time you use it, all available pocket queries will be downloaded and processed, even if they have been uploaded manually before. The next time you use it, only pocket queries that are new or have been generated since the last download will be included.
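For the curious, the incremental behavior only requires remembering when the last download happened. Below is a rough sketch of that bookkeeping, assuming a hypothetical IPocketQueryClient wrapper; the type and method names are stand-ins, not the actual Geocaching Live API.

using System;
using System.Collections.Generic;
using System.Linq;

namespace PocketQuerySketch
{
    // Illustrative stand-in for whatever wraps the Geocaching Live API;
    // not the real API surface.
    public interface IPocketQueryClient
    {
        IEnumerable<PocketQuery> GetAvailablePocketQueries();
        void DownloadAndProcess(PocketQuery pq);
    }

    public class PocketQuery
    {
        public string Id;
        public DateTime GeneratedUtc;
    }

    public class PocketQueryDownloader
    {
        // Persisted per user. DateTime.MinValue means "never downloaded",
        // which is why the very first run fetches everything, even pocket
        // queries that were previously uploaded by hand.
        public DateTime LastDownloadUtc = DateTime.MinValue;

        public void DownloadNew(IPocketQueryClient client)
        {
            DateTime cutoff = LastDownloadUtc;
            foreach (PocketQuery pq in client.GetAvailablePocketQueries()
                                             .Where(p => p.GeneratedUtc > cutoff))
            {
                client.DownloadAndProcess(pq);
            }
            LastDownloadUtc = DateTime.UtcNow;
        }
    }
}

The email trigger described next can simply call the same routine whenever a notification arrives.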

In addition, if you set up your pocket queries to send “pocket query generated” notifications to your bcaching autogpx email address, each notification email received will trigger the download automatically.

BCaching move to new server(s)

As I mentioned in the last post, I wanted to retire our old, somewhat expensive Windows VPS and migrate to one or more cheaper servers, hopefully without sacrificing any quality of service.

After a couple of weeks of research, experimentation, and testing, I converted the geocache detail data away from individual flat files. Geocache detail includes short and long descriptions, hint, logs, and travelbug inventory. When I originally went to individual files I thought it was a terrible idea, but it DID get the large volume of “less often” used data out of MongoDB, which saved other headaches, and it turned out to work reasonably well over the next year or so. That data has now been moved into CouchDB, with up to 25K geocache records per database file, for a total of 168 files at the moment. These database files are each between 50 and 150 MB and are much easier to manage than the 1.5 million individual files. Since CouchDB uses an append-only mechanism for adding and modifying data, these files grow with updates, leaving unused holes in the file, but a nightly job reclaims the space by compacting files that pass an appropriate threshold. Compacting a database file involves writing a new copy of the file without the unused data, and this has worked well since the database files are reasonably sized.
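As a rough sketch of what such a nightly job can look like (everything here is an assumption: CouchDB on localhost:5984, a made-up caches_NNNN naming scheme, and the threshold check reduced to a comment), compaction is just one POST per database:

using System;
using System.Net;

namespace NightlyCompactSketch
{
    class Program
    {
        static void Main()
        {
            using (WebClient http = new WebClient())
            {
                for (int i = 1; i <= 168; i++)
                {
                    // Hypothetical database naming; the real scheme may differ.
                    string db = string.Format("caches_{0:D4}", i);

                    // A real job would first GET /{db} and compare its
                    // disk_size and data_size fields, compacting only when
                    // the reclaimable gap passes a threshold.

                    // POST /{db}/_compact rewrites the file without the dead
                    // space left behind by CouchDB's append-only updates.
                    http.Headers[HttpRequestHeader.ContentType] = "application/json";
                    http.UploadString("http://localhost:5984/" + db + "/_compact", "");
                    Console.WriteLine(db + ": compaction triggered");
                }
            }
        }
    }
}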

The main new server is a Linux VPS (Debian 7) with 50 GB disk, 1.5 GB RAM, and 2 CPUs. It is running CouchDB, MongoDB, MySql, Nginx, Node, and the phpBB forums.

There is also a new temporary “small” Windows VPS (30 GB disk, 768 MB RAM, 2 CPUs) that is running the email processor, the bcaching GPX file processor, and the main web site.

My plan is to focus next on rewriting the email and GPX processors in Python and/or Node and moving them to the Linux server as well. Once that is done, the ASP.Net web site can be moved to much cheaper shared hosting. The last step would be to rewrite the web site to run under Node.

As part of the rewrite, I’m planning to open source all the new code which will be hosted on GitHub. I wonder if anyone might be interested in contributing.

Keep on caching!

BCaching 0.9.2

No major changes.
– Bug fix (duplicate entries in recent finds)
– MongoDB client updated to the latest version
– Munzee experimentation is over and the feature has been dropped from the system (nobody was using it and it wasn’t working as well as I’d hoped)

The yearly renewal date is coming up for the main VPS, and since the cost is a bit high I’m looking into migrating to a smaller/cheaper environment. Currently we’re running on a Windows 2008 R2 server with 4 GB RAM, 80 GB disk, and 2 CPUs. Windows has been my target environment from the beginning, but I’ve been using Linux here and there over the years, and I think it’s time I took advantage of its potential cost savings, at least for some of the application components.

Currently the system is made of the following components:
– MongoDB database and filesystem for data storage
– Perl scripts for email
– Python scripts for release management
– Windows Service (C#) for background processing (GPX files, cache quality scoring, GCVote loading)
– ASP.Net (C#) for main web site
– phpBB + MySql for forums

The data storage, Perl scripts, and forums can be moved fairly easily to a Linux server that would cost about a third of the current Windows server. The Windows service and ASP.Net web site still require a Windows server, but since the data storage accounts for 90% of the disk requirements, I could migrate those to a smaller and cheaper Windows VPS instance.

The main bcaching UI has not changed in quite a while, but there are some really nice new technologies available for building modern websites that I’ve been using at my “real job” or experimenting with on the side. When I can find the time, I’m hoping to redesign and rebuild.

In the meantime, keep enjoying the site and keep on caching!

BCaching 0.9.1

A minor update to the site today.
– Simplified the sign-up process and added the Geocaching Cacher ID to the profile page
– Fixed: corrected coordinates were not used in the mobile “Nearest” view, were not shown on desktop or mobile maps, and were not applied when downloading GPX files or sending to a Garmin device
– UI tweaks on desktop map for IE
– Fixed: “Send to Garmin device” was not honoring the max count

BCaching 0.9

The latest bcaching release 0.9 is now live. There were quite a lot of internal changes but the appearance and behavior of the site should be very similar to before.

Although I did my best to test all aspects of the site to make sure nothing was broken, there were a LOT of changes, so there is a higher probability than usual that I missed something. Please be on the lookout for any issues and post details on the forums as soon as possible.

This was the 2nd major overhaul in data organization. The last one was a little over a year ago when we migrated from a relational MySql database to the document-based MongoDB. At the time, many relational concepts were carried over instead of taking full advantage of a document structure.

I also took the opportunity to make some other wish-list changes. One of these addressed the requirement that every user be associated with a geocaching.com cacherid, which existed because all user-related data on the site was tied to a cacherid. With the reorganization, user-related data is tied to the internal bcaching userid instead. In the future, that will allow new users to take advantage of the site without necessarily being geocaching users.
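Purely as an illustration of the change (the field names here are hypothetical, not bcaching’s actual schema), the user document went from being keyed by the geocaching.com identity to being keyed by an internal id:

namespace UserDocSketch
{
    // Before: the geocaching.com cacherid was effectively the primary key,
    // so a geocaching.com account was mandatory.
    class UserDocBefore
    {
        public string CacherId;      // required geocaching.com identity
        public string[] FoundCaches; // example user-related data
    }

    // After: the internal bcaching userid is the key and the geocaching.com
    // link becomes optional.
    class UserDocAfter
    {
        public string UserId;        // internal bcaching userid (the key)
        public string CacherId;      // optional geocaching.com association
        public string[] FoundCaches;
    }
}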

I’m also experimenting with showing Munzee data mixed with geocaches on the desktop and mobile maps. This is a work in progress, but if you want to take a look you can enable this feature on your profile page.

Stay tuned.

Unexpected power outage

There was an unexpected power outage around 2AM EDT and MongoDB did not start up cleanly which left the site down until later this morning.

Free space and repaired database

On Sunday I ordered a new VPS that has more disk and RAM (80 GB/4 GB vs 40 GB/2 GB) but less redundancy, for a few dollars less. The database server has been migrated over and was reloaded/repaired. It’s also running the latest version of the software with journaling enabled, which should significantly reduce the chance of data corruption in the future.

You may have noticed that GPX file processing is now running behind schedule. It’s not because the pocket queries were backed up, but because GPX processing is still running on the old machine. GPX processing is database intensive and runs best when it is “near” the database server; with one server in Denver and the other in Chicago, the added latency makes processing run slower than usual. I will be moving the GPX processing to the database machine in the next day or two, so processing speed should improve then.

Update 3/11/2012: Finished migrating web application, autogpx processing, etc. over to the new VPS and processing time is back to normal.

Out of disk space again

Time to upgrade the server again. I have to apologize to those of you who have been unable to upload new pocket queries. We’ve been bursting at the seams of the current system over the past couple of months, and this morning it hit the fan again.

I’ve been planning to migrate to an alternate “economy” service that has fewer of the extra features we don’t use anyway, but provides double the disk space and RAM. I was going to try to wait until the end of April, since the current cycle ends at the end of May, but it seems we just can’t wait, so I’m ordering the new service today and will move the databases over ASAP.

Alternate fix for IIS6 eurl.axd 404 error

With ASP.NET 4.0 there is a breaking change when using IIS6 with a web site that mixes “legacy” ASP.NET 2.0 applications with ASP.NET 4.0 applications.

Extensionless URLs that used to be sent to your configured “default content page”, such as “Default.aspx”, will instead be sent to eurl.axd followed by a hash code. Unfortunately, your ASP.NET 2.0 application will not handle that automatically.

Microsoft provides three workarounds in its ASP.NET 4 Breaking Changes document. Two of them amount to not mixing ASP.NET 2.0 and 4.0 applications in the same web site. The third is to disable extensionless URLs altogether.

But what if you DO want to mix ASP.NET 2.0 and 4.0 applications in the same web site AND use extensionless URLs in your 4.0 application?

Why not just handle the eurl.axd path and redirect it to your default document? This worked for me:

1. Create a class library (e.g. eurlredirect.dll) with a single IHttpHandler class.

using System.Web;

namespace eurlredirect
{
    // Catches the eurl.axd requests that ASP.NET 4.0 generates for
    // extensionless URLs and sends them to the default content page.
    public class RedirectHandler : IHttpHandler
    {
        // The handler holds no per-request state, so instances can be reused.
        public bool IsReusable
        {
            get { return true; }
        }

        public void ProcessRequest(HttpContext context)
        {
            // eurl.axd is requested at the application root, so a relative
            // redirect resolves to the root Default.aspx.
            context.Response.Redirect("Default.aspx");
        }
    }
}

2. Add the class library to your ASP.NET 2.0 application’s bin directory.

3. Configure the handler in the ASP.NET 2.0 application’s web.config file by adding the following entry to the <httpHandlers> section:

<add verb="*" path="eurl.axd" validate="false" type="eurlredirect.RedirectHandler, eurlredirect"/>
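One note on the configuration: validate="false" tells ASP.NET not to load the handler type until a matching request actually arrives, so a typo in the type name won’t surface until eurl.axd is first requested.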