Making Raleigh a better place to live, work, and play through technology.


Latest Project Activity

NC Schools Report Cards Update #1

The NC Schools Report Cards were released, and the data is "contained" in SAS. We have "liberated" it and made it available on our Open Data Portal at http://codeforraleigh.opendatasoft.com/. Let's look this over and see if we can do some analysis and visualizations that make the data usable by the public.
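As a starting point for that analysis, here is a minimal sketch of pulling records from the portal with Python, using the standard OpenDataSoft Search API. The dataset id below is a placeholder, so check the portal for the real one:

```python
# Minimal sketch: fetch records from the Code for Raleigh OpenDataSoft
# portal via the OpenDataSoft Search API. The dataset id is hypothetical;
# browse the portal to find the actual id for the report cards dataset.
import requests

PORTAL = "http://codeforraleigh.opendatasoft.com"
DATASET = "nc-schools-report-cards"  # placeholder dataset id

resp = requests.get(
    f"{PORTAL}/api/records/1.0/search/",
    params={"dataset": DATASET, "rows": 100},
)
resp.raise_for_status()

for record in resp.json().get("records", []):
    print(record["fields"])  # each record's fields as a plain dict
```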

Posted by Chris Mathews

CityGram for Raleigh/Triangle Update #2

Code for Cary has worked with Jim Van Fleet at Code for Charlotte to get the Triangle region added to Citygram: https://www.citygram.org

There are issues with properly locating addresses, since Citygram was made for a single city rather than a region. The Cary Brigade has been working to get this resolved.
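To illustrate the kind of ambiguity involved, here is a sketch (not Citygram's actual code; the bounds, coordinates, and geocoder output are all made up) of filtering geocoder candidates against a rough bounding box for the whole region, so only Triangle matches survive:

```python
# Sketch of the region problem: the same address string can geocode to
# matches in several cities. One workaround is to keep only candidates
# whose coordinates fall inside a bounding box covering the whole
# Triangle region. Bounds and candidates below are illustrative only.

# (south, west, north, east) -- rough Raleigh/Durham/Cary bounds
TRIANGLE_BBOX = (35.5, -79.2, 36.2, -78.2)

def in_triangle(lat: float, lon: float) -> bool:
    south, west, north, east = TRIANGLE_BBOX
    return south <= lat <= north and west <= lon <= east

# Hypothetical geocoder output for one ambiguous address:
candidates = [
    {"match": "Cary, NC", "lat": 35.79, "lon": -78.78},
    {"match": "Cary, IL", "lat": 42.21, "lon": -88.24},
]

regional = [c for c in candidates if in_triangle(c["lat"], c["lon"])]
print(regional)  # only the Cary, NC match survives
```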

The first dataset used is crime incident data from the Raleigh and Durham open data portals and the Cary Brigade open data portal.

Posted by Ian Henshaw

CityCamp NC Update #3

Our event is shaping up quite nicely. We are looking to have lightning talks on June 11. Confirmed speakers so far are Twyla McDermott (Charlotte Open Data Portal Manager), Carolina Sullivan (Wake County Commissioner), Lauren Parker (Go Triangle, a regional transit organization), and Lisa Abeyta (AppCityLife), and we've got a few others in the works. Also on that night, we will have a "Taste of CityCamp," where we will walk through what participants can expect during the unconference, and we'll have mock pitches via updates from the brigade.

On Friday, June 12, we have our unconference that will be kicked off with a GIS panel and a keynote from Mark Headd.

Then on Saturday, June 13, we will have our hackathon, with several things happening in parallel: our traditional CityCamp competition and Code for Triangle projects (maybe a Code for Greensboro project as well). We are also looking to partner with Girl Develop It to see if they want to run a coding class that day.

We are also looking to partner with IT-Ology to see if they want to run a training course. Lisa Abeyta from AppCityLife is also interested in doing a boot camp for AppCityLife.

Posted by Jason Hibbets

CityCamp NC Update #2

We are making great progress with CityCamp NC 2015 planning. Our next meet-up is on April 13: http://www.meetup.com/Triangle-Code-for-America/events/221570775/

We have marketing, speaker, and sponsorship committees going. We could use some help with logistics.

Mark Headd, now at Accela and formerly of Code for America, is our keynote speaker. We need help with fundraising and getting more sponsors.

Posted by Jason Hibbets

NC Connect Update #9

Heroku is trialling new pricing levels for their dynos. Here’s the verbatim text they gave:

Free – Experiment in your own dev or demo app with a web and a worker dyno for free. Sleeps after 1 hr of inactivity. Active up to 12 hours a day. No custom domains. 512 MB RAM.

Hobby – Run a small app 24×7 with the Heroku developer experience for $7/dyno/mo. Custom domains. Run a maximum of one dyno per Procfile entry. 512 MB RAM.

Standard 1X, 2X – Build production apps of any size or complexity. Run multiple dynos per Procfile entry to scale out. App metrics, faster builds, and preboot. All Hobby features. 512 MB or 1 GB RAM. $25 or $50/dyno/mo.

Performance – Isolated dynos for your large-scale, high-performance apps. All Standard features. Compose your app with performance and standard dynos. 6 GB RAM. $500/dyno/mo.

It appears to be an attempt to cut down the number of freeloaders using Heroku. Nowadays, especially with faster code and faster computers, a standard 512 MB dyno can power websites with tens of thousands of hits per hour. Few web apps need more than one web dyno, and worker dynos are often not needed. This means that Heroku would only get paid via add-ons, but most add-ons are provided by third parties.

In this new pricing structure, you can't pop up a free site and leave it running 24/7; that would cost you $7/month instead. No real app can live on the free tier any longer, so I would expect the proportion of paying customers to increase under this new pricing scheme. Instead of Heroku being the "obvious" choice for a web app because it's free, you can now measure the $7/month cost against alternatives:

Free tier of Google App Engine: as far as I know, GAE's free tier is still usable for real apps, but you're limited to PHP, Java, Go, or Python, plus the quirks of App Engine's platform.

dotCloud: I don't fully understand dotCloud, as I've never used it, but they seem to price by RAM at $4.32/month per 32 MB used. That seems a bit steep to me, but maybe there's something I'm missing.

AWS: a t2.micro would suit any small app just fine, and it (along with a tiny RDS database) falls under the AWS free tier for a year. A t2.micro has 1 GB of RAM and costs about $9.52/month outside of the free tier.

Other commodity VPS providers: Digital Ocean starts at $5/month for a 512 MB VPS; Linode starts at $10/month for a 1 GB VPS.
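To make the comparison concrete, here is a rough sketch of the monthly arithmetic, assuming a 730-hour billing month and an approximate $0.013/hour on-demand rate for the t2.micro (the other figures are taken straight from the list above):

```python
# Back-of-the-envelope monthly costs for the alternatives above.
# The AWS hourly rate is an assumed approximate on-demand price for
# a t2.micro; everything else comes directly from the list.
HOURS_PER_MONTH = 730  # the usual billing approximation

options = {
    "Heroku Hobby (512 MB)": 7.00,
    "Digital Ocean (512 MB)": 5.00,
    "Linode (1 GB)": 10.00,
    "AWS t2.micro (1 GB)": 0.013 * HOURS_PER_MONTH,  # ~$9.49/mo
}

for name, monthly in sorted(options.items(), key=lambda kv: kv[1]):
    print(f"{name}: ${monthly:.2f}/mo")
```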

Many hobbyists value their time spent configuring Linux at approximately $0/hour, so you'd have to factor that cost, along with lock-in costs, into each alternative. Note that Heroku hasn't announced what will happen to the old pricing tiers, so existing users may be grandfathered in or may be forced to switch.

Posted by Chris Mathews

NC Connect Update #8

Clarifying a part of the model. There are two approaches to the problem of data updates and curation. One: require all participating agencies to cleanse their data and output it in a predefined, prescribed format (Open Referral). Two: meet them where they are and take the data in whatever format they are using now. As long as it's digital (and that can include PDF), we can mold the data into a standard format. The second approach does suffer from less data being available at the onset, but the users of the system will see some results more quickly and with less effort on their part.

Hopefully this will show them the usefulness of the system, bring them along naturally, and encourage them to gather and maintain more thorough data. The first approach would provide richer data earlier, but will suffer from agencies lacking the resources to gather and maintain the data. In other words, they have to do a lot more work up front with the promise that it will pay off later.
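As a rough sketch of the second approach (the agency formats, field names, and sample data below are all hypothetical), one small adapter per agency can mold whatever that agency already produces into a shared record shape:

```python
# Sketch of "meet them where they are": one tiny adapter per agency
# converts that agency's existing output into a common record shape.
# Field names, formats, and sample data are made up for illustration.
import csv
import io

def normalize_agency_a(raw_csv: str):
    """Agency A already exports CSV; just rename the columns."""
    for row in csv.DictReader(io.StringIO(raw_csv)):
        yield {"name": row["Org Name"], "phone": row["Phone#"]}

def normalize_agency_b(lines):
    """Agency B sends 'name | phone' lines (e.g. text pulled from a PDF)."""
    for line in lines:
        name, phone = (part.strip() for part in line.split("|"))
        yield {"name": name, "phone": phone}

records = []
records += normalize_agency_a("Org Name,Phone#\nWake Relief,919-555-0100\n")
records += normalize_agency_b(["Durham Aid | 919-555-0101"])
print(records)  # every record now has the same shape
```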

Thoughts?

Posted by Chris Mathews