I’m thinking about a humane research workshop/conference model, compatible with mid 21st century climate and health emergencies. How about this:
- Two-page papers/extended abstracts solicited via a public call, each peer reviewed by at least three people from a diverse panel.
- Chosen papers are presented as pre-recorded 15-20 minute talks.
- These videos are streamed two at a time, in sessions 12 hours apart, and then rewatchable at any time.
- The first of these sessions has an intro and just one talk. Each following session has one video repeated from the previous session and one new one. This way, people can watch all the videos by attending half the sessions, and see half of them at their premiere.
- Participants attend one session per 24 hours, at the time that best fits their time zone / sleeping pattern. Basically the workshop operates in two ‘phases’, offset by 12 hours, in communication with each other.
- Those in a time zone compatible with both phases are encouraged to join whichever phase would otherwise have fewer people.
- There could be six talks over four days.
- Discussion is summarised / minuted as text, and shared between the two phases. Part of the final session is for live responses/discussion between authors.
- Authors submit a final, potentially extended version of their paper, to include responses to other talks, published open access.
- Multiple ‘hubs’ are organised (ideally at least one per continent, inspired by ICMPC/ESCOM) where people can watch and discuss the videos together, perhaps building in-person events around the sessions that may or may not be streamed online.
- Bursaries could then be made available for a few early career researchers to travel between hubs for cultural exchange, with support for local touring over ~1 month to make the most of the workshop’s emissions budget.
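The premiere/repeat arithmetic above can be sanity-checked with a short sketch. This is a hypothetical illustration using the six-talks-over-four-days figure; talk names like T1 are placeholders:

```python
# Sketch of the two-phase premiere/repeat schedule (illustrative only).
talks = [f"T{i}" for i in range(1, 7)]  # six talks over four days

sessions = [["intro", talks[0]]]  # first session: intro plus one premiere
for prev, new in zip(talks, talks[1:]):
    sessions.append([prev, new])  # repeat of the last premiere, plus one new talk
sessions.append([talks[-1]])      # last repeat of the final talk

for n, s in enumerate(sessions, start=1):
    print(f"Session {n} (T+{(n - 1) * 12}h): {' + '.join(s)}")

# Attending every other session is enough to see every talk exactly once:
odd = {t for n, s in enumerate(sessions, 1) if n % 2 == 1 for t in s if t != "intro"}
even = {t for n, s in enumerate(sessions, 1) if n % 2 == 0 for t in s}
assert odd == even == set(talks)
```

With six talks this gives seven video sessions, leaving the final 12-hour slot of the four days free for the live cross-phase discussion.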
It’s volunteer responsibility amnesty day every solstice – the next one is on 21st December 2021. This is good timing for me, so until then I’m going to add to this post some responsibilities I’m giving up.
“I need to put a few things down. I hope other people pick them up and carry this work forward. But even if no one does, I need to stop, or at least pause for a while.”
I’ve picked up quite a lot of community responsibilities over the past couple of decades, and would like to pick up some new ones, but need to put some existing ones down first.
TOPLAP live coding collective
- current responsibility – running the server hosting the WordPress blog (which is in turn maintained by the excellent Luis Navarro Del Angel) and the discourse forum, and (badly) running a discord chat server. Renewing/paying for the toplap.org domain. I’ve been running a TOPLAP rocketchat too. No-one really has responsibility for TOPLAP as an organisation; it’s pretty diffuse these days. I helped bring together the TOPLAP transnodal stream in February, which was huge and amazing, but I feel we’re a bit lacking in the organisational structure to make it happen again. I co-ran a TOPLAP livecode festival in 2018 but don’t think I’ll have the capacity to do that again. *edit* oh and the toplap social media thingies on twitter, facebook and I think instagram..
- want to keep doing this? No, I won’t have time next year. I don’t mind continuing to provide server space for the web and discourse servers, but it’d be better if someone else took it on really. That said, I’m very happy to support others taking things on, with advice and help.
- next step I’d love for others to take over these responsibilities but I’m not sure how to go about that. Without action I fear TOPLAP will fade away, but maybe that’s not a bad thing if it leaves space for something else? The rocketchat has gone quiet now that people have mostly moved to discord and telegram etc, so I’ll shut that down at the end of August 2021 (having warned and consulted people about this some months ago). Drop by the forum or drop me an email if you’d like to get involved and pick up some responsibilities!
Algorave
- current responsibility – overlapping with TOPLAP above, running the algorave website, which these days is mostly a gig listing, although I fear quite a lot of algoraves go unlisted. Renewing/paying for the algorave.org domain. Similar to the TOPLAP transnodal stream, there have been worldwide algorave streams celebrating its birthday etc, but not for a while; they could do with some organisation. Algorave is coming up to its tenth birthday next year and it would be nice to do a distributed event for that. Mostly though algorave is an unprotected brand which people seem quite happy to spread around the world without coordination, which is great. Still, it would be nice to have more communication between the different algorave organisers. I co-organise algoraves in Sheffield when there isn’t a pandemic on. *edit* Also the algorave twitter/facebook/instagram profiles/pages.
- want to keep doing this? Again I don’t mind continuing to provide server space for the web and discourse server but maybe it’d be better if someone else took it on. Generally would like to move to more collective organisation.
- next step I’d like to put some effort into making something happen for algorave’s birthday in March 2022, but then step back and focus on other things. It’d be great if other organisers reached out to each other to keep things moving, and maybe work out what to do with algorave.com.
TidalCycles live coding environment for algorithmic pattern
- current responsibility – tidalcycles.org is collectively run via a github repo, with raph leading on the documentation, which is excellent. Tyler kicked off an ace series of online meetups which are getting a lot of support, and Andrea has taken on the Atom plugin, putting loads of work into pushing that forward. So I feel that tidal has a proper life of its own now, which is great. I’m still accepting personal donations on the website but will soon switch this over to an opencollective page for a shared donations pool. In terms of Tidal as a free/open source project, I still lead on development, approving and commenting on pull requests, and am currently exploring a rewrite. Julian Rohrhuber leads on SuperDirt as original and primary author of that part of the project. I also maintain the tidal social media profiles, although they aren’t so active, and host the club.tidalcycles.org forum and tidal discord, although those could be more organised/collectively run. I was running an online video course, although I won’t have time to add to that in the foreseeable future – all the materials are now in the creative commons. I also spend a fair bit of time answering questions from beginners upwards, and have mentored ‘google summer of code’ projects the last two summers.
- want to keep doing this? Generally yes, I want to stay involved, although the project needs to continue becoming more organised and generally get better at welcoming new users and contributors, I think. Tidal itself needs to become more accessible, especially in terms of becoming easier to install.
- next step The recent summer of code project by Martin brings us very close to a binary distribution of Tidal, automatically built on github actions – it just needs a last push to get supercollider/superdirt bundled up and we’re away.. It would be great to have some energy from others on this, to get things working and tested on multiple platforms. Passing on primary organisation of the forum and social media profiles would be great too; they could all do with a refresh. More iterations of the tidal club multiday streams would be ace as well, and having others lead on moderation of the discord would also be good. I still feel I want to lead on the development side of the core Tidal pattern library, but as others contribute more PRs this could shift naturally. It’d be great if someone could take on organisation of regular or semi-regular tidal ‘innards’ meetings, to get people working on different aspects of Tidal to coordinate more, and make the most of Martin’s summer of code work.
<more to follow in future edits..>
We had some pandemic-related challenges, but Eimear + I had a great time collaborating as part of a residency for IKLECTIK. Here’s a stream of Eimear + me jamming, with Eimear on voice + drum machine, and me live coding with their voice as source material, using TidalCycles + SuperDirt with the live looper by Thomas Grund. Later in the video I introduce some Tidal features implemented during the residency.
Here’s the full info about our residency, including our project blog. Hopefully we’ll be able to perform in a live venue soon!
I’m really looking forward to joining JB from Music Hackspace to go through the pre-history, history, present and potential future of Tidal, possibly in that order.. Here’s the youtube live stream – if you click on it you should see the date + time in your local time zone, and you can click to get a reminder:
More info here:
Mostly a note to self, but maybe this is useful for someone else trying to get hifi audio from jack into zoom using linux mint or similar, so I thought I’d make it a blog post.
Zoom processes voice separately from desktop audio. So to send music and voice separately, while jack audio is running, you have to have two feeds going from jack to pulseaudio.
I already have jack set up to connect to pulse, so desktop audio works as normal. I think this was just a case of installing the `pulseaudio-module-jack` package, and configuring jack to run `pacmd set-default-sink jack_out` after startup.
To add a separate stereo channel out of jack into pulseaudio, I ran
pacmd load-module module-jack-source channels=2
Then the new jack sink appears in qjackctl and I can connect up my music sound source (supercollider) to that.
In zoom I then share a window, with stereo hifi audio switched on. `pavucontrol` is super useful at this point: you can see zoom is listening separately for voice and desktop audio, which appears as `zoom_combine_device`. Unfortunately I couldn’t simply connect the `zoom_combine_device` to the new jack source – I don’t know why. However it’s possible to create a ‘loopback’ device for connecting sources to sinks in pulseaudio. I tried this:
pacmd load-module module-loopback channels=2
I expected to have to do more in pavucontrol to connect this up to `zoom_combine_device`, but somehow it happened automatically. I think I had to connect it to the second jack source, but everything else ‘just worked’ somehow. Lucky me.
With a bit of experimentation I can hear that as expected, supercollider sounds different, depending on whether I connect it to the voice or desktop audio input into zoom.
I’ve only tested by recording a solo zoom session so far, and can hear there’s more dynamic range with desktop audio. However I can’t hear it in stereo, which is really what I’m after. I’m hoping that’s just zoom recording in mono for some reason, and that in practice it will be in stereo.. *edit* After further tests, it all works very well, with hifi, stereo audio from supercollider, and voice treated as voice – so be aware that the record function in zoom does not give you the same audio as the other person hears.. Great! I think though that both sides need to have stereo enabled in their zoom settings for the other party to hear it in stereo – I’m not 100% sure that this is the case, but it’s what I’ve read..
I’ve been enjoying the idea of “research products” as opposed to “research prototypes”. A prototype is understood as a partially working thing, a step towards an answer to a design problem. A research product, on the other hand, is understood as it is, rather than as what it might become. Here’s how Odom et al describe it in their 2016 CHI paper “From Research Prototype to Research Product”. Unfortunately this is a closed access ACM paper, but you can find a pdf online, for now at least. Here are the four features of research products that they highlight:
- Inquiry driven: a research product aims to drive a research inquiry through the making and experience of a design artifact. Research products are designed to ask particular research questions about potential alternative futures. In this way, they embody theoretical stances on a design issue or set of issues.
- Finish: a research product is designed such that the nature of the engagement that people have with it is predicated on what it is as opposed to what it might become. It emphasizes the actuality of the design artifact. This quality of finish is bound to the artifact’s resolution and clarity in terms of its design and subsequent perception in use.
- Fit: the aim of a research product is to be lived-with and experienced in an everyday fashion over time. Under these conditions, the nuanced dimensions of human experience can emerge. In our cases, we leveraged fit to investigate research questions related to human-technology relations, everyday practices, and temporality. Fit requires the artifact to balance the delicate threshold between being neither too familiar nor too strange.
- Independent: a research product operates effectively when it is freely deployable in the field for an extended duration. This means that from technical, material, and design perspectives an artifact can be lived with for a long duration in everyday conditions without the intervention of a researcher.
I’m finding this helpful in thinking about my live loom. It’s not intended as a commercially viable product, but it’s also not intended as a step towards one. It’s intended to be a device for exploring computation, without automation and all its forced simplicity. It works very well – every time I use it I’m blown away by the generative complexities of handweaving, and it helps me see computer programming language design afresh, with a beginner’s mind. So it’s inquiry driven, and finished in that it’s ready to embody an area of inquiry and host exploration of it. In terms of fit – well, its lasercut body and trailing arduino align it with 21st century maker culture, and its solenoids align it with 20th century electromechanics, but its fundamental design is that of an ancient warp-weighted loom, so it has some fit there, although it has a lot to learn from the past in terms of ergonomics.
In terms of ‘independence’ it’s not quite there yet, but is designed with open hardware principles, using easy to source parts and permissive CC licensed designs. The next step is supporting others in replicating the hardware which will happen in the next few months. This is where it gets exciting for me – how will the live loom function as an ‘epistemic tool’ – will the research ideas carry with the loom, or will the replicators ‘misunderstand’ the loom and take it in a new direction? Of course the latter case would be failure in one respect, but I get the impression that designers see such failure as positive, where objects support divergent use..
In any case by thinking about the live loom as a research product, it helps me explain what it’s for. When I show it to people, they often treat it as a work-in-progress towards a fully automated loom, like one driven by the famous Jacquard mechanism. That’s the opposite of what I’m trying to do, as that mechanism is what separates humans from the mathematical basis of weaving as computational interference. As a research product, the live loom foregrounds computational augmentation rather than automation.
Research papers as research products
This leads me to think about research papers as research products too – many will have had the experience of publishing a research paper, getting excited when someone cites it, only to find that they’ve totally misunderstood what you were trying to say, even taking the opposite meaning. What if we treated papers as research products that we deploy in the world, and then observe what they do? I just read Christopher Alexander’s foreword to Richard Gabriel’s book “Patterns of Software”. Alexander is an architect (of buildings), and Gabriel is a computer scientist who has studied Alexander’s work for decades in order to try to develop a similar pattern-based approach in software. What’s interesting is that Alexander seems profoundly disappointed in the book he’s writing a foreword for; although he chooses his words generously, he basically asks Gabriel to write a different book, and to learn from his more recent work, where he solves all the problems in the older work that Gabriel references. It is amazing that Gabriel would host such a text at the front of his book! Richard Gabriel is really an amazing computer scientist and thinker, and I think Alexander is being a bit naive in assuming that such a comparatively young field as computer science could solve its core problems by going through his four-volume text on designing physical buildings – these are really very different domains indeed. What is more interesting is that Gabriel gives voice to the person he cites, going way beyond peer review to give his text its own life in the process of being published. I’m looking forward to the rest of the book!
I really enjoyed mentoring Lizzie’s project last year as part of the ‘summer of haskell’, which is in turn part of the Google Summer of Code. Every year Google pay students to spend a couple of months over the summer contributing to a free/open source project, and Lizzie spent the time exploring automatic generation of Tidal code. It was a fun time, and sparked off a nice collaboration with Shawn and Jeremy around their awesome Cibo project (which we should really pick up again soon)..
It’s sometimes a bit lonely working on Tidal, as Haskell has the reputation of being difficult to learn, especially if you’re used to another language.. But it’s also super interesting and rewarding – a great language for thinking deeply about representations. Over the last year or so more contributors have popped up though, with great PRs coming in, so I think a community is slowly forming around the innards, helped by cleaner code, a more complete test suite etc.
Anyway the Summer of Haskell folks are getting ready to accept submissions, and I’ve contributed a Tidal idea to the list – to make Tidal easier to install. The reason this hasn’t been done before is because making a binary distribution of a Haskell interpreter is no mean feat.. But I think it’s possible, would have some interesting aspects and would attract the profound gratitude of a lot of people (Tidal isn’t the easiest to install). I’d be very happy to hear about other Tidal-related projects I could helpfully mentor too.
More info on the summer of haskell here.
I’m excited to be working with some ace people planning a new project, “AI as collective performance”, namely Mika Satomi (artist and designer), Berit Greinke (Universität der Künste Berlin and Einstein Center Digital Future), Juan Felipe Amaya Gonzalez (performance artist) and Deva Schubert (freelance choreographer). We’re part of a cohort of ten projects exploring the intersection of AI and culture, jointly funded by Stiftung Niedersachsen and VolkswagenStiftung.
Here’s the blurb so far:
The project “AI as collective performance” deals with the explainability of algorithms and artificial intelligence. The goal is to develop a collaborative performance in which the processes behind AI become visible through choreography, interactive costumes, and live coding. Each person represents a node of the network that grows, changes, breaks patterns and creates new ones again. In this project, the human body acts as a processor. Here, a choreographer is also a programmer. By translating AI into physical movements, the complex technology becomes tangible and perceivable.
I’m happy to be working with Antonio Roberts on this mentoring project working with early career Black artists, initially in the Birmingham/West Midlands area. The project is structured around workshop sessions exploring TidalCycles and other live coding technologies and ideas, but the idea is to support the artists involved in taking live coding somewhere new. The call is out now until 14th March. We’re working on this with Christopher Haworth, funded by UKRI as part of his Music and the Internet project. I’m really looking forward to seeing where the artists take the ideas. Full info, including the thinking behind the programme, here: algo-afro-futures.lurk.org