Six years ago, I wrote about Simple Features (sf) in R. I mapped the number of pupils per high school in the Perth metro area. At the time, I didn't include how to obtain the shapefile, provided as open data by Landgate on behalf of the Western Australian government through its Shared Location Information Platform (SLIP).

I have now updated the script, available in my code repository, with an R implementation of the methodology in SLIP's How To Guides (Archive).

The relevant code looks as follows, greatly simplified through the use of the httr2 library, the R equivalent of the Requests library used in the Python example in the SLIP knowledge base:

library(httr2)

tempdirSHP <- tempdir()
tempfileSHP <- tempfile()
# Create the token request
req <- request("") |>
    req_headers("Authorization" = "Basic ZGlyZWN0LWRvd25sb2Fk") |>
    req_body_form(grant_type = "password",
                  # SLIP username and password stored in
                  # pass - the standard unix password manager
                  username = system2("pass", args = " | grep Username | sed -e 's/Username: //'", stdout = TRUE),
                  password = system2("pass", args = " | head -1", stdout = TRUE))
# Obtain the token response
tokenResponse <- req_perform(req)
# Define the SLIP file to download
slipUrl <- ""
# Create the request for the SLIP file using the received token
req <- request(slipUrl) |>
    req_headers("Authorization" = paste0("Bearer ", resp_body_json(tokenResponse)$access_token))
# Obtain the SLIP file using the created request
responseSlip <- req_perform(req)
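The snippet ends with the raw response; a minimal sketch of the remaining step, writing the download to disk and reading it with sf, could look as follows. This assumes the SLIP product arrives as a zipped shapefile, and the file layout is an assumption rather than something confirmed by the SLIP documentation:

```r
# Sketch only: assumes responseSlip holds a zipped shapefile
library(sf)
writeBin(resp_body_raw(responseSlip), tempfileSHP)
unzip(tempfileSHP, exdir = tempdirSHP)
# Find the .shp inside the extracted archive and read it as an sf object
shpFile <- list.files(tempdirSHP, pattern = "\\.shp$",
                      full.names = TRUE, recursive = TRUE)
schools <- st_read(shpFile[1])
```

The temporary file and directory created at the top of the script are used here, which is why they were defined before the requests were made.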

An updated plot of the high school enrollment numbers looks as follows (for clarity, I've only included the names of schools in the top 5% as ranked by student numbers):

Pupil density in Western Australian high schools

A new approach to modeling, based on category theory and supporting software, is currently being developed; it facilitates building advanced models such as digital twins.

During the 2023 SIAM Conference on Computational Science and Engineering, a group of researchers presented their

diagrammatic representations that provide an intuitive interface for specifying the relationships between variables in a system of equations, a method for composing systems equations into a multiphysics model using an operad of wiring diagrams, and an algorithm for deriving solvers using directed hypergraphs, yielding a method of generating executable systems from these diagrams using the operators of discrete exterior calculus on a simplicial set. The generated solvers produce numerical solutions consistent with state of the art open source tools.

As has been pointed out, mathematics can rarely be isomorphic to its software implementation, yet here the researchers go a long way towards enabling exactly that.

Using the Julia language, the applied category theorists working on this concept wrote software (StockFlow) that allows users to build stock-flow diagrams and do all sorts of things with them: from drawing them, to transforming them into other forms such as dynamical systems and system structure diagrams, to solving the underlying differential equations.

The team has also built software (ModelCollab) that hides all the Julia code, enabling people who aren't trained mathematicians or computer scientists to apply this way of modeling in their work.

This fascinates me: a way to write and audit complex systems like digital twins using free and open-source tools could be transformative. It would make such models accessible to smaller organisations, or to non-core departments in bigger organisations; until now, only organisations with enough money and people have been able to develop them, and then only for their key operations.

Read more on John Baez's blog.

In a 'blast from the past', I sent my first pingback after writing the previous post. A pingback is a way for one blogger to notify another that they've written a post referring to theirs, e.g. as a reply to, or an extension of, the ideas raised.

The process is more involved than a webmention, which I've used before and implemented support for a while back, as it requires constructing an XML message rather than simply exchanging URLs.
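For comparison, a webmention is just a form-encoded POST of the two URLs to the receiver's advertised endpoint. The endpoint and URLs below are placeholders, not the actual ones involved here:

```shell
# Hypothetical endpoint and URLs, for illustration only
curl -i -d source=https://example.com/my-post/ \
        -d target=https://example.org/their-post/ \
        https://example.org/webmention-endpoint
```

No XML envelope is needed; the two URL parameters are the entire message.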

First, I created a file pingback.xml containing the URLs of the blog post I wrote (the source) and the one I made reference to within my post (the target). The standard defines the schema, an XML-RPC call to the pingback.ping method, resulting in the following (URLs elided):

<?xml version="1.0" encoding="UTF-8"?>
<methodCall>
    <methodName>pingback.ping</methodName>
    <params>
        <param>
            <value><string><!-- source: URL of my post --></string></value>
        </param>
        <param>
            <value><string><!-- target: URL of the post I referred to --></string></value>
        </param>
    </params>
</methodCall>

Next, I used curl on the command line to send this file in a POST request to WordPress's pingback service. I had to use the -k option to make this work, bypassing verification of the TLS certificate.

curl -k -d @pingback.xml

In a sign things were going well, I saw the following appear in my website's access log:

 - - [29/Oct/2023:09:35:06 +0100] "GET /blog/posts/agent_based_models_digital_twins/ HTTP/1.1" 200 2676 "" ";; verifying pingback from"

Finally, I received the following response to my curl request on the command-line:

<?xml version="1.0" encoding="UTF-8"?>
<methodResponse>
    <params>
        <param>
            <value>
                <string>Pingback from to registered. Keep the web talking! :-)</string>
            </value>
        </param>
    </params>
</methodResponse>

That "Keep the web talking! :-)" message made me smile.

In order to understand a bit better how things were being processed, I checked the WordPress code for its pingback service; it appears they take the title of the linked article as the author, which seems a bit odd. The pingback standard didn't allow for anything beyond the exchange of links, though. How your reference is summarised on the referred site is entirely left to the recipient, who may process pingbacks manually or use a service that automates (parts of) the processing.

WordPress processes pingbacks automatically, turning them into comments on the original post. As the comment text, WordPress uses the link text of the anchor element, surrounded by horizontal ellipses, with some filtering to prevent the comment from becoming too long. It's odd that the standard didn't define further approaches to make this a bit easier. A pingback attribute on the anchor element would have been helpful, for instance: we could put some text in there to summarise our page for when the pingback is processed automatically. Perhaps most surprisingly, with the benefit of hindsight, it would have been interesting had the subsequent standard that emerged, Webmention, implemented some further enhancements. Aaron Parecki, author of the Webmention W3C Recommendation, might know whether that was ever considered, or whether it simply wasn't within the use case for pingbacks / webmentions. There seems to have been some thought put into it in 2019, at least.
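Such a hypothetical attribute, to be clear not part of the Pingback or Webmention standards nor of HTML, might have looked something like:

```html
<!-- Purely hypothetical: no standard defines a summary attribute like this -->
<a href="https://example.org/their-post/"
   pingback-summary="I extend this idea to digital twins built from stock-flow diagrams.">
  their post
</a>
```

The receiving site could then use the attribute's text as the comment body instead of guessing a summary from the surrounding link text.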