Showing posts with label JSON. Show all posts

Tuesday, November 13, 2018

Quick-and-dirty IPC with Python, JSON and pyperclip

By Vasudev Ram



Blue Gene image attribution

Hi, readers,

Some time ago I had written this post.

pyperclip, a cool Python clipboard module


The pyperclip module allows you to programmatically copy/paste text to/from the system clipboard.

Recently, I realized that pyperclip's copy and paste functionality could be used to create a sort of rudimentary IPC (Inter Process Communication) between two Python programs running on the same machine.

So I whipped up a couple of small programs, a sender and a receiver, as a proof of concept of this idea.

Here is the sender, pyperclip_ipc_sender.py:
'''
pyperclip_ipc_sender.py
Purpose: To send JSON data to the clipboard from  
a Python object.
Author: Vasudev Ram
Copyright 2018 Vasudev Ram
Web site: https://vasudevram.github.io
Blog: https://jugad2.blogspot.com
Training: https://jugad2.blogspot.com/p/training.html
Product store: https://gumroad.com/vasudevram
'''

from __future__ import print_function
import pyperclip as ppc
import json
import pprint

def generate_data():
    d = {"North": 1000, "South": 2000, "East": 3000, "West": 4000}
    return d

def send_data(d):
    ppc.copy(json.dumps(d))

def main():
    print("In pyperclip_ipc_sender.py")
    print("Generating data")
    d = generate_data()
    print("data is:")
    pprint.pprint(d)
    print("Copying data to clipboard as JSON")
    send_data(d)

main()
And here is the receiver, pyperclip_ipc_receiver.py:
'''
pyperclip_ipc_receiver.py
Purpose: To receive JSON data from the clipboard into 
a Python object and print it.
Author: Vasudev Ram
Copyright 2018 Vasudev Ram
Web site: https://vasudevram.github.io
Blog: https://jugad2.blogspot.com
Training: https://jugad2.blogspot.com/p/training.html
Product store: https://gumroad.com/vasudevram
'''

from __future__ import print_function
import pyperclip as ppc
import json
import pprint

def receive_data():
    d = json.loads(ppc.paste())
    return d

def main():
    print("In pyperclip_ipc_receiver.py")
    print("Pasting data from clipboard to Python object")
    data = receive_data()
    print("data is:")
    pprint.pprint(data)

main()
First I ran the sender in one command window:
$ python pyperclip_ipc_sender.py
In pyperclip_ipc_sender.py
Generating data
data is:
{'East': 3000, 'North': 1000, 'South': 2000, 'West': 4000}
Copying data to clipboard as JSON
Then I ran the receiver in another command window:
$ python pyperclip_ipc_receiver.py
In pyperclip_ipc_receiver.py
Pasting data from clipboard to Python object
data is:
{u'East': 3000, u'North': 1000, u'South': 2000, u'West': 4000}
You can see that the receiver has received the same data that was sent by the sender - via the clipboard.

A few points about this technique:

- If you run the receiver without running the sender, or even before running the sender, the receiver will pick up whatever data was last put into the clipboard, either by some other program, or manually by you. For example, if you selected some text in an editor and then pressed Ctrl-C (to copy the selected text to the clipboard), the receiver would get that text (if it was JSON text - see two points below). However, that is not a bug, but a feature :)

- Obviously, this is not meant for production use, due to potential security issues. It's just a toy application as a proof of concept of this idea.

- Since the sender converts the Python object to JSON before copying it to the clipboard with pyperclip, the receiver also expects the data it pastes from the clipboard to be valid JSON. So if you instead copy some non-JSON data to the clipboard and then run the receiver, you will get an error. I tried this, and got:

ValueError: No JSON object could be decoded

To handle this gracefully, you can trap the ValueError (and maybe other kinds of exceptions that Python's json library may raise), with a try-except block around the code that pastes the data from the clipboard. You can then either tell the user to try again, or print/log the error and exit, depending on whether the receiver was an interactive or a non-interactive program.
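As a hedged sketch of that approach (the helper name parse_clipboard_text and the message text are mine, not from the programs above), the JSON-parsing step could be wrapped like this; in the receiver, you would call it as parse_clipboard_text(ppc.paste()):

```python
import json

def parse_clipboard_text(text):
    # Try to parse the pasted text as JSON. Return None instead of
    # letting the ValueError propagate, so the caller can decide
    # whether to retry, log the error, or exit.
    try:
        return json.loads(text)
    except ValueError as ve:
        print("Clipboard does not contain valid JSON: {}".format(ve))
        return None
```

An interactive receiver could then loop, asking the user to copy valid JSON and try again; a non-interactive one could log the message and exit with a non-zero status.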

The image at the top of the post is of an IBM Blue Gene supercomputer.

From the Wikipedia article about it:

[ The project created three generations of supercomputers, Blue Gene/L, Blue Gene/P, and Blue Gene/Q. Blue Gene systems have often led the TOP500[1] and Green500[2] rankings of the most powerful and most power efficient supercomputers, respectively. Blue Gene systems have also consistently scored top positions in the Graph500 list.[3] The project was awarded the 2009 National Medal of Technology and Innovation.[4] ]

- Enjoy.



- Vasudev Ram - Online Python training and consulting

I conduct online courses on Python programming, Unix / Linux commands and shell scripting and SQL programming and database design, with course material and personal coaching sessions.

The course details and testimonials are here.

Contact me for details of course content, terms and schedule.

Getting a new web site or blog, and want to help preserve the environment at the same time? Check out GreenGeeks.com web hosting.

DPD: Digital Publishing for Ebooks and Downloads.

Learning Linux? Hit the ground running with my vi quickstart tutorial. I wrote it at the request of two Windows system administrator friends who were given additional charge of some Unix systems. They later told me that it helped them to quickly start using vi to edit text files on Unix. Of course, vi/vim is one of the most ubiquitous text editors around, and works on most other common operating systems and on some uncommon ones too, so the knowledge of how to use it will carry over to those systems too.

Check out WP Engine, powerful WordPress hosting.

Sell More Digital Products With SendOwl.

Get a fast web site with A2 Hosting.

Creating or want to create online products for sale? Check out ConvertKit, email marketing for online creators.

Own a piece of history: Legendary American Cookware

Teachable: feature-packed course creation platform, with unlimited video, courses and students.

Managed WordPress hosting with Flywheel.

Posts about: Python * DLang * xtopdf

My ActiveState Code recipes


Monday, December 7, 2015

Using JSON Schema with Python to validate JSON data

By Vasudev Ram


I got to know about JSON Schema and the jsonschema Python library recently.


JSON Schema is a scheme (pun not intended) or method for checking that input JSON data adheres to a specified schema, roughly similar to what can be done for XML data using an XML Schema.

So I thought of writing a small program to try out the jsonschema library. Here it is:
# test_jsonschema_unix.py
# A program to try the jsonschema Python library.
# Uses it to validate some JSON data.
# Follows the Unix convention of writing normal output to the standard 
# output (stdout), and errors to the standard error output (stderr).
# Author: Vasudev Ram
# Copyright 2015 Vasudev Ram

from __future__ import print_function
import sys
import json
import jsonschema
from jsonschema import validate

# Create the schema, as a nested Python dict, 
# specifying the data elements, their names and their types.
schema = {
    "type" : "object",
    "properties" : {
        "price" : {"type" : "number"},
        "name" : {"type" : "string"},
    },
}

print("Testing use of jsonschema for data validation.")
print("Using the following schema:")
print(schema)
print("Pretty-printed schema:")
print(json.dumps(schema, indent=4))

# The data to be validated:
# Two records OK, three records in ERROR.
data = \
[
    { "name": "Apples", "price": 10},
    { "name": "Bananas", "price": 20},
    { "name": "Cherries", "price": "thirty"},
    { "name": 40, "price": 40},
    { "name": 50, "price": "fifty"}
]

print("Raw input data:")
print(data)
print("Pretty-printed input data:")
print(json.dumps(data, indent=4))

print("Validating the input data using jsonschema:")
for idx, item in enumerate(data):
    try:
        validate(item, schema)
        sys.stdout.write("Record #{}: OK\n".format(idx))
    except jsonschema.exceptions.ValidationError as ve:
        sys.stderr.write("Record #{}: ERROR\n".format(idx))
        sys.stderr.write(str(ve) + "\n")
The name of the program is test_jsonschema_unix.py, because, as you can see in the source code, the normal output is sent to sys.stdout (standard output) and the errors are sent to sys.stderr (standard error output), as Unix tools often do. So, to run this with the stdout and stderr redirected to separate files, we can do this:

$ python test_jsonschema_unix.py >out 2>err

(where the filename out is for output and err is for error)

which gives us this for out:
Testing use of jsonschema for data validation.
Using the following schema:
{'type': 'object', 'properties': {'price': {'type': 'number'}, 'name': {'type': 'string'}}}
Pretty-printed schema:
{
    "type": "object", 
    "properties": {
        "price": {
            "type": "number"
        }, 
        "name": {
            "type": "string"
        }
    }
}
Raw input data:
[{'price': 10, 'name': 'Apples'}, {'price': 20, 'name': 'Bananas'}, {'price': 'thirty', 'name': 'Cherries'}, {'price': 40, 'name': 40}, {'price': 'fifty', 'name': 50}]
Pretty-printed input data:
[
    {
        "price": 10, 
        "name": "Apples"
    }, 
    {
        "price": 20, 
        "name": "Bananas"
    }, 
    {
        "price": "thirty", 
        "name": "Cherries"
    }, 
    {
        "price": 40, 
        "name": 40
    }, 
    {
        "price": "fifty", 
        "name": 50
    }
]
Validating the input data using jsonschema:
Record #0: OK
Record #1: OK
and this for err:
Record #2: ERROR
'thirty' is not of type 'number'

Failed validating 'type' in schema['properties']['price']:
    {'type': 'number'}

On instance['price']:
    'thirty'
Record #3: ERROR
40 is not of type 'string'

Failed validating 'type' in schema['properties']['name']:
    {'type': 'string'}

On instance['name']:
    40
Record #4: ERROR
'fifty' is not of type 'number'

Failed validating 'type' in schema['properties']['price']:
    {'type': 'number'}

On instance['price']:
    'fifty'
So we can see that the good records went to out and the bad ones went to err, which shows that jsonschema validated each record as expected.
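One caveat: the schema above checks only the types of keys that are present, so a record that omits "price" entirely would still pass. A hedged sketch of one way to tighten it, using JSON Schema's "required" keyword (the helper name is_valid is my own):

```python
import jsonschema
from jsonschema import validate

# The same schema as above, plus "required", so that records missing
# either key are rejected, as well as records with wrong types.
schema = {
    "type": "object",
    "properties": {
        "price": {"type": "number"},
        "name": {"type": "string"},
    },
    "required": ["name", "price"],
}

def is_valid(item):
    try:
        validate(item, schema)
        return True
    except jsonschema.exceptions.ValidationError:
        return False

print(is_valid({"name": "Apples", "price": 10}))  # True
print(is_valid({"name": "Apples"}))               # False: "price" is missing
```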

- Vasudev Ram - Online Python training and programming

Signup to hear about new products and services I create.

Posts about Python  Posts about xtopdf

My ActiveState recipes

Wednesday, December 10, 2014

Convert JSON to PDF with xtopdf

By Vasudev Ram



I added support for JSON as an input format to xtopdf, my Python toolkit for PDF creation.

Here is an example program, JSONtoPDF.py, that shows how to use xtopdf to convert JSON data to PDF:

# JSONtoPDF.py

# This program shows how to convert JSON input to PDF output.
# Author: Vasudev Ram - http://www.dancingbison.com
# Copyright 2014 Vasudev Ram - http://www.dancingbison.com
# This program is part of the xtopdf toolkit:
#     https://bitbucket.org/vasudevram/xtopdf

import sys
import json
from PDFWriter import PDFWriter

def error_exit(message):
    sys.stderr.write(message + "\n")
    sys.exit(1)

def JSONtoPDF(json_data):
    # Get the data values from the JSON string json_data.
    try:
        data = json.loads(json_data)
        pdf_filename = data['pdf_filename']
        font_name = data['font_name']
        font_size = data['font_size']
        header = data['header']
        footer = data['footer']
        lines = data['lines']
    except Exception as e:
        error_exit("Invalid JSON data: {}".format(e))
    # Generate the PDF using the data values.
    try:
        with PDFWriter(pdf_filename) as pw:
            pw.setFont(font_name, font_size)
            pw.setHeader(header)
            pw.setFooter(footer)
            for line in lines:
                pw.writeLine(line)
    except IOError as ioe:
        error_exit("IOError while generating PDF file: {}".format(ioe))
    except Exception as e:
        error_exit("Error while generating PDF file: {}".format(e))

def testJSONtoPDF():
    # Use a with statement so the input file is closed promptly.
    with open('the-man-in-the-arena.txt') as fil:
        lis = fil.readlines()
    data = {
        'pdf_filename': 'the-man-in-the-arena.pdf',
        'font_name': 'Courier',
        'font_size': 12,
        'header': 'The Man in the Arena',
        'footer': 'Generated by xtopdf - http://google.com/search?q=xtopdf',
        'lines': lis,
    }
    json_data = json.dumps(data)
    JSONtoPDF(json_data)
def main():
    testJSONtoPDF() 

if __name__ == '__main__':
    main()
In the example program, I used as input the text of "The Man in the Arena", a well-known excerpt of a speech by Theodore Roosevelt, the 26th President of the United States.

Here is a screenshot of the PDF file created by JSONtoPDF.py:


Here is the Wikipedia page about JSON, JavaScript Object Notation.

Here is the Wikipedia page about PDF, the Portable Document Format.
PDF became an ISO standard (ISO 32000) some years ago.

- Vasudev Ram - Dancing Bison Enterprises - Python training and consulting

Signup for email about new Python-related products from me.

Contact Page

Thursday, March 27, 2014

Database to JSON in Python

By Vasudev Ram






I had been doing some work involving JSON recently; while doing that, I got the idea of writing some code to convert database data to JSON. Here's a simple Python program I wrote for that. It can be improved in many ways (*), and there may be many other ways of implementing it, but this program shows the basic approach. The program is simple, but can be useful, since JSON is a useful data interchange format.

See this StackOverflow post for some approaches.

Also, I used an SQLite database in this example for convenience: the sqlite3 module comes with the Python standard library, so any reader can run this program without having to download and install another database and its Python driver. The program can easily be adapted (by someone with basic knowledge of SQL) to other databases that support some form of access from Python. Note: the program makes use of an SQLite-specific feature (the sqlite3.Row row factory), so some changes may be required for other databases. For comparison purposes, I print out the data fetched from the database both as a Python object and as a JSON string.

(Also see a related post: JSONLint.com, an online JSON validator.)

Here is the program, DBtoJSON.py:
# DBtoJSON.py
# Author: Vasudev Ram - http://www.dancingbison.com
# Copyright 2014 Vasudev Ram
# DBtoJSON.py is a program to DEMOnstrate how to read 
# SQLite database data and convert it to JSON.

import sys
import sqlite3
import json

try:

    conn = sqlite3.connect('example.db')

    # This enables column access by name: row['column_name']
    conn.row_factory = sqlite3.Row

    curs = conn.cursor()

    # Create table.
    curs.execute('''DROP TABLE IF EXISTS stocks''')
    curs.execute('''CREATE TABLE stocks
                 (date text, trans text, symbol text, qty real, price real)''')

    # Insert a few rows of data.
    curs.execute("INSERT INTO stocks VALUES ('2006-01-05','BUY','RHAT',100,35.0)")
    curs.execute("INSERT INTO stocks VALUES ('2007-02-06','SELL','ORCL',200,25.1)")
    curs.execute("INSERT INTO stocks VALUES ('2008-03-06','HOLD','IBM',200,45.2)")

    # Commit the inserted rows.
    conn.commit()

    # Now fetch back the inserted data and write it to JSON.
    curs.execute("SELECT * FROM stocks")
    recs = curs.fetchall()

    print "DB data as a list with a dict per DB record:"
    rows = [ dict(rec) for rec in recs ]
    print rows

    print

    print "DB data as a single JSON string:"
    rows_json = json.dumps(rows)
    print rows_json

except Exception as e:
    print "ERROR: Caught exception: " + repr(e)
    sys.exit(1)

# EOF
The program is self-contained; you don't even need to set up a database and a table and populate it beforehand; the code does that. You just run:
python DBtoJSON.py
And here is its output:
DB data as a list with a dict per DB record:
[{'date': u'2006-01-05', 'symbol': u'RHAT', 'trans': u'BUY', 'price': 35.0, 'qty': 100.0}, {'date': u'2007-02-06', 'symbol': u'ORCL', 'trans': u'SELL', 'price': 25.1, 'qty': 200.0}, {'date': u'2008-03-06', 'symbol': u'IBM', 'trans': u'HOLD', 'price': 45.2, 'qty': 200.0}]

DB data as a single JSON string:
[{"date": "2006-01-05", "symbol": "RHAT", "trans": "BUY", "price": 35.0, "qty": 100.0}, {"date": "2007-02-06", "symbol": "ORCL", "trans": "SELL", "price": 25.1, "qty": 200.0}, {"date": "2008-03-06", "symbol": "IBM", "trans": "HOLD", "price": 45.2, "qty": 200.0}]

(*) And remember, this was a demo :-)
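One such improvement is portability: the dict(rec) conversion above works because of the SQLite-specific row_factory = sqlite3.Row setting. A hedged sketch of a driver-neutral alternative, using cursor.description from the Python DB-API (PEP 249); the helper name rows_to_dicts is my own:

```python
import json
import sqlite3

def rows_to_dicts(cursor, rows):
    # cursor.description is part of the Python DB-API (PEP 249), so
    # this approach works with most database drivers, not just sqlite3.
    colnames = [d[0] for d in cursor.description]
    return [dict(zip(colnames, row)) for row in rows]

conn = sqlite3.connect(':memory:')
curs = conn.cursor()
curs.execute("CREATE TABLE stocks (symbol text, qty real)")
curs.execute("INSERT INTO stocks VALUES ('RHAT', 100)")
curs.execute("SELECT * FROM stocks")
rows = rows_to_dicts(curs, curs.fetchall())
print(json.dumps(rows))  # [{"symbol": "RHAT", "qty": 100.0}]
```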
Read other Python posts on my blog.

- Vasudev Ram - Dancing Bison Enterprises

Contact Page

Thursday, March 20, 2014

JSONLint.com, an online JSON validator

By Vasudev Ram



JSON page on Wikipedia.

JSON, as most developers nowadays know, has become useful as a data format both for web client-server communication and for data interchange between different languages, since most popular programming languages support it (see the lower part of the JSON home page for a list).

While searching for information about some specific aspects of JSON for some Python consulting work, I came across this site:

JSONLint.com

JSONLint.com is an online JSON validator. It is from the Arc90 Lab. (Arc90 is the creator of Readability, a tool that removes the clutter from web pages and makes a clean view for reading now or later on your computer, smartphone, or tablet.)

You paste some JSON data into a text box on the site and then click the Validate button, and it tells you whether the JSON is valid or not.

JSONLint.com is a useful resource for any language with JSON support, including Python.
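For quick checks from within Python itself, the standard json module can serve as a minimal local validator (the function name is_valid_json is my own):

```python
import json

def is_valid_json(text):
    # json.loads raises ValueError (JSONDecodeError on Python 3)
    # for malformed input, which is all a basic validator needs.
    try:
        json.loads(text)
        return True
    except ValueError:
        return False

print(is_valid_json('{"name": "JSONLint", "valid": true}'))  # True
print(is_valid_json("{'single': 'quotes'}"))                 # False: JSON requires double quotes
```

Of course, a site like JSONLint goes further, pretty-printing the input and pointing at where parsing failed.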

P.S. Arc90 is being acquired by SFX Entertainment, Inc. (NASDAQ:SFXE).

- Vasudev Ram - Python consulting and training

Contact Page

Thursday, July 11, 2013

kaptan, multi-format configuration file manager for Python


By Vasudev Ram

Seen via Twitter.



kaptan is a Python library to manage configuration files. It looks useful and easy to use. It supports configuration from dicts, and from JSON, YAML, INI, and even Python :-) files.
Here is a simple program to show the use of kaptan:
# test_kaptan.py

import kaptan

config = kaptan.Kaptan()
config.import_config({
    'environment': 'DEV',
    'redis_uri': 'redis://localhost:6379/0',
    'debug': False,
    'pagination': {
        'per_page': 10,
        'limit': 20,
    }
})

print "-" * 50
print "environment:", config.get("environment")
print "redis_uri:", config.get("redis_uri")
print "pagination.limit:", config.get("pagination.limit")
print "-" * 50
print "pagination:", config.get("pagination")
print "-" * 50

Run the above program with:
python test_kaptan.py
Output:
--------------------------------------------------
environment: DEV
redis_uri: redis://localhost:6379/0
pagination.limit: 20
--------------------------------------------------
pagination: {'per_page': 10, 'limit': 20}
--------------------------------------------------

- Vasudev Ram - Dancing Bison Enterprises

Contact me

Thursday, December 13, 2012

REBOL, language that influenced JSON, is now open source

Comments on: R3 Source Code Released!

REBOL is an interesting language. It's free to download, available for both Linux and Windows, and quite small in size (MB).

UPDATE: Carl's comment on building REBOL from source, in the REBOL repo on Github, mentions Android as a platform that REBOL can be built for. Interesting  ...

It can be used at the command line for useful one-liners, in command-line scripts, and even to write GUI programs.

It has built-in support for some common Internet protocols.

And many other features.

I had tried out REBOL for some time, soon after it was first released several years ago, and found it fun to use.

REBOL was created by Carl Sassenrath, who was also the main designer of the Amiga computer and its OS, a very advanced PC for its time, with multitasking and advanced multimedia when almost no other personal computers had them.

Main REBOL site for downloading the language interpreters, documentation, examples:

www.rebol.com

REBOL is now open source:

https://github.com/rebol/r3

Hacker News thread about the open sourcing of REBOL:

http://news.ycombinator.com/item?id=4912963

Has interesting points. More than one commenter pointed out that REBOL was an influence on JSON, which was News (heh) to me:

https://erikeldridge.wordpress.com/2009/07/28/notes-bayjax-meetup-yahoo-sunnyvale-727-crockford-the-json-saga/

- Vasudev Ram
www.dancingbison.com

Friday, October 26, 2012

Akiban, new database, supported by SQLAlchemy


By Vasudev Ram

Akiban is a new type of database, with what may be some interesting features.

Seen via the Planet Python Twitter account, specifically, via this tweet:

It's the author of SQLAlchemy, Michael Bayer, talking about starting to provide SQLAlchemy support for Akiban.

The Akiban team seems accomplished. Akiban, the company, is based in Boston.

P.S. I was interested to see that one of the Akiban team members, Jack Orenstein, was a founder/architect at Archivas, which was later acquired by Hitachi Data Systems. I had read about Archivas some years ago; they created a kind of archival system for large amounts of data, with fast retrieval times. They used Python as one of their development tools. Archivas was also based in Boston.

P.P.S. Jack Orenstein of Akiban (and Archivas), referred to above, made a comment on this post. Check it out below. Pretty interesting info ...

Incidentally, I learnt from his comment that Jack is also the creator of Osh, the Object Shell, which was one of the tools I blogged about in my first post on some ways of doing UNIX-style pipes in Python. And seeing those tools is what inspired me to later create pipe_controller. Talk about serendipity ... :-)


- Vasudev Ram - Dancing Bison Enterprises

Thursday, October 25, 2012

jsonpickle: JSON (de)serialization for complex Python objects


By Vasudev Ram

jsonpickle is a Python library for JSON.

From the site: "jsonpickle is a Python library for serialization and deserialization of complex Python objects to and from JSON. The standard Python libraries for encoding Python into JSON, such as the stdlib’s json, simplejson, and demjson, can only handle Python primitives that have a direct JSON equivalent (e.g. dicts, lists, strings, ints, etc.). jsonpickle builds on top of these libraries and allows more complex data structures to be serialized to JSON. jsonpickle is highly configurable and extendable–allowing the user to choose the JSON backend and add additional backends."

When I tried it with a simple example of my own, jsonpickle seemed not to work quite as the docs say. E.g. pickling an instance of a simple class Foo and then unpickling it into a different variable results in a dict, not an instance:

>>> type(foo)
<type 'instance'>
>>> type(foo2)
<type 'dict'>

The dict does have the same data as the instance, though. This may be a usage mistake or an actual bug - checking it out ...

UPDATE: Thomas K (see comments on this post) pointed out that the error could be because I was not using new-style classes, i.e. not inheriting from object in my class Foo. Changing that fixed it:
>>> import jsonpickle
>>> class Foo(object):
...     def __init__(self, bar, baz):
...             self._bar = bar
...             self._baz = baz
...     def show(self):
...             print "self._bar, self._baz =", self._bar, self._baz
...
>>> foo = Foo("a", "b")
>>> foo.show()
self._bar, self._baz = a b
>>> foo_pickled = jsonpickle.encode(foo)
>>> foo_pickled
'{"py/object": "__main__.Foo", "_bar": "a", "_baz": "b"}'
>>> foo2 = jsonpickle.decode(foo_pickled)
>>> foo2.show()
self._bar, self._baz = a b
>>> foo == foo2
False
>>> type(foo) == type(foo2)
True

- Vasudev Ram - Dancing Bison Enterprises

Monday, August 13, 2012

Inferno on Disco, Python MapReduce library / daemon for structured text

By Vasudev Ram


Inferno is an open-source Python MapReduce library. It has (from the site):

[ A query language for large amounts of structured text (CSV, JSON, etc).

A continuous and scheduled MapReduce daemon with an HTTP interface that automatically launches MapReduce jobs to handle a constant stream of incoming data. ]

Overview of Inferno.

This overview page has a nice serial example: starting with a small set of test data, it shows how to query for a certain result, in SQL and then in AWK (both are easy one-liners), but then goes on to show how to achieve the same result using Inferno.

The interesting point is that the Inferno code is also small (a "rule" of ~10 lines, presumably stored in a config file) and a one-line command, but the difference from the SQL and AWK examples is that this runs a Disco MapReduce job to distribute the work across the nodes on a cluster. There is almost nothing in the Inferno code to indicate that this is a distributed computing MapReduce job.

Inferno uses Disco.

Disco is "a distributed computing framework based on the MapReduce paradigm. Disco is open-source; developed by Nokia Research Center to solve real problems in handling massive amounts of data."

Some users of Disco: (Chango, Nokia, Zemanta). Chango staff seem to be the developers of Disco.

- Vasudev Ram - Dancing Bison Enterprises

Tuesday, July 31, 2012

Twython - a Python Twitter library

By Vasudev Ram


Twython is a Python library for Twitter. It is written by Ryan McGrath.

I saw it via my own recent blog post about Twitter libraries.

Excerpt from the Twython Github site:

[ An up to date, pure Python wrapper for the Twitter API. Supports Twitter's main API, Twitter's search API, and using OAuth with Twitter. ]

I tried it out a little (the search feature), it worked fine. It returns JSON output.

The Twython installer (the usual "python setup.py install" kind) also installs the simplejson Python library, which is required, as well as the requests Python library, a more user-friendly HTTP library for Python (billed as "HTTP for Humans") than the standard httplib. BTW, another good Python HTTP library is httplib2, which was first developed by Joe Gregorio, IIRC.

- Vasudev Ram - Dancing Bison Enterprises

Thursday, September 15, 2011

Picloud - publish Python function and call it via REST

By Vasudev Ram - dancingbison.com | @vasudevram | jugad2.blogspot.com

This looks interesting. Just saw it on Hacker News, maybe more later after checking it out.

Picloud - publish your Python function and call it via REST:

http://blog.picloud.com/2011/09/14/introducing-function-publishing-via-rest/


They have a simple three-step process for making your Python functions callable across the Net via REST:

1. Define your function (on your own machine).
2. Upload it to PiCloud via a method call in their client library (downloaded once).
3. Call it via Python (or via the curl utility, for manual testing).

JSON format is supported for the output.

Hacker News thread about it:

http://news.ycombinator.com/item?id=2999247


Update: the PiCloud team consists of former UC Berkeley graduates and the company has backing from Andreessen Horowitz and Kleiner Perkins, as per their site.

Posted via email
- Vasudev Ram @ Dancing Bison