  1. @chloeweil great article and great work implementing! Interested in your choice to use a database for performance reasons: was that prompted by actual experience, or just the cited help thread? fwiw I’m having no performance problems storing >2000 notes in flat files with a CSV file index.
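    A minimal sketch of that flat-file-plus-CSV-index setup (the filenames and column layout here are assumptions, not the actual schema):

    ```python
    # Look a note up by ID via a CSV index of flat files.
    # Assumed index columns: id, published, path.
    import csv

    def find_note(note_id, index_path="notes/index.csv"):
        with open(index_path, newline="") as f:
            for row in csv.DictReader(f):
                if row["id"] == note_id:
                    with open(row["path"]) as note:
                        return note.read()  # the note's flat-file contents
        return None
    ```

    Even a linear scan over a few thousand index rows is fast enough that a database is unlikely to be needed at this scale.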

  2. Conversation summaries:

    Writing acceptance tests in a different language from your application is philosophically a good idea.

    When syndicating data, the relative quality of each copy can be used to determine the canonical version even if the separate versions don't link to each other (example: a photographer shoots RAWs and gives TIFFs to the client; the RAW is proof of provenance).

    Not drinking tea and not having cream on cream teas reduces the day-to-day hassle, stress and confusion of living in Devon by approximately 90%.

  3. I was going to spend this evening working on a browser extension, but I think it would be better spent providing a /data export utility for fellow ex users.

    From my initial research, it looks like /u/username.json is the best bet, as it gives a JSON array of all posts written by username, along with like and comment data. It accepts a max_time=timestamp query param, and a _ query param whose function I’m not sure of.

    To iterate through all the pages of posts from a certain user, start with their profile URL w/ .json tacked on the end, fetch all the items, get the datetime of the last item, convert that to a timestamp, fetch the same URL with ?max_time=timestamp, repeat until an empty array is returned.
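    A minimal sketch of that loop, assuming a hypothetical host and that each item carries an ISO-8601 `published` field (both assumptions; the real key and date format may differ):

    ```python
    # Page through all of a user's posts via repeated ?max_time= requests.
    import calendar
    import time

    import requests

    def fetch_all_posts(username):
        url = "https://example.com/u/{}.json".format(username)  # hypothetical host
        posts, max_time = [], None
        while True:
            params = {"max_time": max_time} if max_time is not None else {}
            page = requests.get(url, params=params).json()
            if not page:  # empty array: no more pages
                return posts
            posts.extend(page)
            # Convert the last item's datetime to a Unix timestamp for the next request.
            parsed = time.strptime(page[-1]["published"], "%Y-%m-%dT%H:%M:%S")
            max_time = calendar.timegm(parsed)
    ```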

  4. Ben Ward: @BarnabyWalters The one strange thing I see in your replies to @drewm is the lack of threading in client UI. Missing the in_reply_to header?

    @benward heh, or not :/ Hopefully this time though. If not I won’t bother you with any more of these tweets :)
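    For reference, threading in Twitter clients depends on the reply carrying `in_reply_to_status_id`. A minimal sketch of setting it when syndicating, assuming the v1.1 `statuses/update` endpoint (credentials and tweet ID are placeholders); note that v1.1 silently ignores the parameter unless the status text @-mentions the author of the tweet being replied to:

    ```python
    # Post a threaded reply via Twitter's v1.1 REST API.
    import requests
    from requests_oauthlib import OAuth1

    auth = OAuth1("CONSUMER_KEY", "CONSUMER_SECRET",
                  "ACCESS_TOKEN", "ACCESS_SECRET")  # placeholder credentials
    resp = requests.post(
        "https://api.twitter.com/1.1/statuses/update.json",
        auth=auth,
        data={
            "status": "@benward testing threaded replies",
            # Ignored unless the status @-mentions the tweet's author:
            "in_reply_to_status_id": "123456789",  # placeholder tweet ID
        },
    )
    resp.raise_for_status()
    ```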

  5. Anyone know a good way of integration testing how a web app communicates with Twitter? I want to make my testing better so I can test syndication, but I’m not entirely sure how to go about it.
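    One common approach (an assumption here, not necessarily what was settled on) is to stub Twitter's HTTP endpoint in the test suite, e.g. with Python's `responses` library, and assert on what the app actually sent; `syndicate_note` below is a hypothetical stand-in for the app's syndication code:

    ```python
    import requests
    import responses

    def syndicate_note(text):
        # Hypothetical stand-in for the app's real syndication code.
        return requests.post(
            "https://api.twitter.com/1.1/statuses/update.json",
            data={"status": text},
        )

    @responses.activate
    def test_note_is_syndicated_to_twitter():
        # Stub the endpoint so the test makes no real network calls.
        responses.add(
            responses.POST,
            "https://api.twitter.com/1.1/statuses/update.json",
            json={"id_str": "123456789"},
            status=200,
        )
        resp = syndicate_note("Hello from my site")
        assert resp.json()["id_str"] == "123456789"
        # The stubbed call records exactly what the app sent.
        assert responses.calls[0].request.body == "status=Hello+from+my+site"
    ```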