A new Litter Survey
In a culmination of litter surveys and litter picks, linked data and data exploration, and remoteStorage and ActivityPub, I have created a web-based litter pick/survey app that I hope will allow federated citizen science.
My latest litter pick target was Hoe Stream and the White Rose Lane Local Nature Reserve. Here's how it went.
I just created a Gitlab CI job to create a release with information from a CHANGELOG.md file for some of my projects. Here's how I did it.
I noticed something strange happening in the build process during a multi-tasking bug fix. It turns out I was using Gitlab CI's caching incorrectly. I should have been using artifacts. Here's what I saw.
As a birthday treat, I took the day off work to try out my electronerised litter picker. Here's how it went.
In preparation for a day of litter picking, I finally got round to a project idea - attaching a camera to a litter picker to record it all. Here's what I did.
I finally started implementing UI testing on first-draft using WebdriverIO. While writing tests was easy, getting the tests running was a little more difficult. Here is how I did it.
Hooray! My new blog is live! Based on Sapper, using MongoDB and eventually ActivityPub and ActivityStreams, it will be my federated posting hub to the world.
Creating this new blog, I wanted to make sure there was no metadata leaking personal information. Here's how I removed all but the metadata tags I wanted from my photos.
Using tmux for your terminal multiplexer but want an easy way to reattach to a session? Here's a small bash script to do it.
Here's how to help your readers save time by making your post's shell commands easy to select and copy - with a simple CSS property.
Making my new blog, I didn't initially set the published dates to be native dates in the database. Here's what I did to change them ...and do all the upgrades I needed.
I recently needed to test that some Vue components were creating the correct HTML. To do this, I decided to create snapshots of Object representations of the rendered HTML.
HTML5 number inputs aren't useful, but tel inputs have all the power
I decided to look into the extortion emails I have been getting and wrote a small script to extract the bitcoin addresses that have been used.
As part of my pledge not to upgrade, I decided to repair two of my failing mice instead of replacing them with a brand new model (as tempting as it was). Here's what I did.
Switching from no server-side rendering (SSR) to SSR can require a rethinking of your app's loading process. On my blog, like in many examples, I initially set it up with no SSR and loaded all content asynchronously - quick and dirty, but it meant I could easily test all of the communication between server and client.
When moving to SSR, you need to determine what content to render on the server, how to load data into your store, and when to activate the client app.
What you choose to do will depend on the app you are trying to create and the user experience you want to achieve.
What content and components you choose to load on the server render will affect how the page initially looks (and how it looks for people with Javascript disabled) and how large the page is to download. Some components, such as graphs and maps, may not be renderable on the server, so may need to be left out.
You may want to produce different renders for humans and computers, such as web crawlers. For instance, on a list of blog posts, you may want to return only the first 20 posts for a human, but all of the posts for a crawler to ensure all your posts are indexed.
There are many ways to skin an orange, and there are even more ways to load data into a store. Here are the three general approaches I considered:
Creating an initial state object before creating the store would mean duplicating all of the loading logic scattered throughout the components and router configuration, as well as the data-fetching logic located in the application actions. Depending on how complex your app is and how customised you want the loads to be, this can get very complex very quickly and result in a lot of code duplication, or in having to extract out this logic.
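To make the duplication problem concrete, here is a minimal sketch of the "initial state first" approach. The fetchers and routes are hypothetical stand-ins for an app's real loading logic, and the store is a bare-bones placeholder rather than any particular library:

```javascript
// Hypothetical fetcher standing in for the app's real data-fetching logic.
async function fetchPosts() {
  return [{ slug: 'a-new-litter-survey', title: 'A new Litter Survey' }];
}

// Build the complete initial state up front, duplicating the per-route
// loading logic that normally lives in components and route handlers.
async function buildInitialState(url) {
  const state = { posts: [], post: null };
  if (url === '/posts') {
    state.posts = await fetchPosts();
  }
  // ...every route the app knows about needs a matching branch here,
  // which is where the duplication problem comes from.
  return state;
}

// The store is only created once the state is fully populated.
async function createStoreForRequest(url) {
  const initialState = await buildInitialState(url);
  return { getState: () => initialState }; // minimal stand-in for a real store
}
```

The upside is that the store is complete before any rendering starts; the downside is the growing `if`/route table mirroring logic the components already contain.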
Using the store actions to load the data after the store is created is simpler than the above, but again, depending on the complexity, it can lead to a lot of duplicated logic and code.
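A sketch of the second approach, for contrast, with a minimal hand-rolled store. `loadPosts` and the route check are hypothetical, but they show where the router still ends up repeating knowledge the components already have:

```javascript
// Minimal store: just enough to hold state and accept updates.
function createStore() {
  let state = { posts: null };
  return {
    getState: () => state,
    dispatch(update) { state = { ...state, ...update }; },
  };
}

// Hypothetical action: fetches data, then writes it into the store.
async function loadPosts(store) {
  const posts = await Promise.resolve([{ title: 'A new Litter Survey' }]);
  store.dispatch({ posts });
}

// Server side: create the store, run the actions the route needs, then
// render. The per-route action list is where logic still gets duplicated
// between the router configuration and the components.
async function prepareStore(url) {
  const store = createStore();
  if (url === '/posts') await loadPosts(store);
  return store;
}
```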
Using the render cycles to load the data means that there is no duplication of logic or fetching code; however, it will be slower than downloading the data directly. It also means that you have to be able to run the render/update cycles multiple times (and, in the case of the client app, not have them affect the DOM if you do not include the data with the server render).
Frameworks like Nuxt do this with component functions like asyncData. However, with Nuxt's asyncData, you can't (at least at the time of writing) conditionally download data based on props passed to the component.
When the client app activates, it will need the same data that was used to render the page on the server. Unfortunately, for some apps (and most app frameworks), this data cannot be recovered from the rendered HTML, and therefore must either be downloaded separately from the rendered page, or duplicated in the rendered page in an app-readable format, like JSON. Fetching it separately will increase the amount of time before the client app can activate (start managing the DOM). Including the data in the rendered page in an app-readable format will increase the size of the rendered page - pointlessly so for users who don't have Javascript enabled.
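The "duplicate it in the page" option usually looks something like the sketch below. The `window.__INITIAL_STATE__` name is a common convention rather than anything from this blog's code, and the `<` escape matters because a post body containing `</script>` would otherwise break out of the script tag:

```javascript
// Embed the store state in the rendered page so the client app can
// activate without refetching anything.
function renderPage(html, state) {
  // Escape "<" so state containing "</script>" can't close the tag early.
  const json = JSON.stringify(state).replace(/</g, '\\u003c');
  return [
    '<div id="app">' + html + '</div>',
    '<script>window.__INITIAL_STATE__ = ' + json + ';</script>',
  ].join('\n');
}
```

On the client, the app would read `window.__INITIAL_STATE__` as its store's initial state before taking over the DOM.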
When to activate the client app and have it start managing the DOM will depend on how complete the server render was and how much data has been included with the server-rendered page. Activating the app without all the data necessary to render the page as it was on the server will result in nasty page flashes.
Having a fully server-rendered page and not including any app-readable data in the render means the app won't be able to activate until all of the data has been fetched. This delay in activating the app will usually not be an issue, unless you use components that can't be rendered without the client app being active - for example, a map or an interactive graph where the code for rendering the component lives in the client app. It does mean, however, that users without Javascript will get the full content.
Asynchronous components are usually the result of code splitting and allow components to be loaded only when they are required. This can be extremely beneficial when using modules like syntax highlighters, graphing libraries and other complex libraries, and it can drastically cut down the amount of Javascript that needs to be downloaded.
These components do cause problems during server renders though, as they need to be loaded before the final render is complete.
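A minimal sketch of that problem, with the dynamic import replaced by a resolved Promise. `loadHighlighter` is a hypothetical stand-in for something like `import('highlight.js')`; the point is that the server render cannot produce final HTML until the import's Promise has resolved:

```javascript
// Module-level cache: kick the "import" off once, reuse it on later renders.
let highlighterPromise = null;

function loadHighlighter() {
  if (!highlighterPromise) {
    // Stand-in for a real dynamic import of a syntax-highlighting library.
    highlighterPromise = Promise.resolve({
      highlight: (code) => '<code>' + code + '</code>',
    });
  }
  return highlighterPromise;
}

// A server render has to wait for every pending import like this one
// before the final HTML can be produced.
async function renderWithAsyncComponents(code) {
  const highlighter = await loadHighlighter();
  return highlighter.highlight(code);
}
```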
My aim in switching to SSR was to be able to serve pages as static content to improve the loading time of my blog. It also had the added benefit of improving the blog's SEO, as all content is available to crawlers without Javascript. Though this is becoming a moot point with some crawlers nowadays, it still affects the page previews generated by most social media apps, among others.
With all of the above considered, I chose to go down the path of using the render cycles to load the data. I chose this route so that I didn't have to refactor or duplicate much of the content-loading logic, and it also meant I could deal with the async components I had in the app, such as the syntax highlighter.
To do this, I devised a Loader component to track any Promises created during the render cycles - both from fetching data and from importing components - and to rerender until no new Promises were created.
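The core loop behind that idea can be sketched as follows. This is not the blog's actual Loader code: `renderApp` is a stand-in for `renderToString` plus a Loader that records every Promise created during that pass, and the pass limit is a hypothetical safety valve:

```javascript
// Render repeatedly until a pass completes without creating new Promises.
async function renderUntilSettled(renderApp, maxPasses = 10) {
  let html = '';
  for (let pass = 0; pass < maxPasses; pass++) {
    const pending = [];                     // Promises created this pass
    html = renderApp((p) => pending.push(p));
    if (pending.length === 0) return html;  // nothing new: render is final
    await Promise.all(pending);             // wait for data/components to arrive
  }
  return html; // give up after maxPasses to avoid an infinite loop
}

// Example: a fake app that needs one async fetch before it can render.
let data = null;
const fakeApp = (track) => {
  if (data === null) {
    track(Promise.resolve('litter survey').then((d) => { data = d; }));
    return 'loading';
  }
  return '<h1>' + data + '</h1>';
};
```

The first pass renders "loading" and registers the fetch; the second pass finds the data in place, creates no new Promises, and its output becomes the final HTML.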
The Loader component was also able to be used to display a loader when navigating in the client app.
I also decided to have multiple levels of render.
The Loader component creates an array where any Promises or manual loadings can be tracked. It has a Context Consumer component to allow child components to register their Promises with it.
However, not doing this meant that I would have had to either pass the finalised state with the rendered page, or find a way to download the data in the client app without React rerendering the page as it would look with no data loaded.
On each request, I created a new instance of MARSS.
One issue I encountered when implementing the loader with multiple calls to renderToString was that the component classes were instantiated every time renderToString was called. This meant that any state set after the class was created - for instance, in the render loop - was not carried over to the next render call. I found this in the markdown component I had created, because it loaded and stored the highlighter module during the render cycle. Changing it to load the module into a module-level variable when required meant that it would survive the re-instantiation.
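The fix can be sketched like this. The `Markdown` class below is a simplified, hypothetical stand-in for the real markdown component, and the highlighter is faked with a resolved Promise in place of a dynamic import:

```javascript
// Module scope: this survives component re-instantiation, because it lives
// as long as the module, not as long as any one component instance.
let highlighter = null;

async function getHighlighter() {
  if (!highlighter) {
    // In the real app this would be a dynamic import of the
    // syntax-highlighting library.
    highlighter = await Promise.resolve({ highlight: (s) => s.toUpperCase() });
  }
  return highlighter;
}

class Markdown {
  // A fresh instance is created on every renderToString call, but the
  // module variable above keeps the loaded highlighter from being lost.
  async render(source) {
    const h = await getHighlighter();
    return h.highlight(source);
  }
}
```

Storing the loaded module on `this` instead would lose it on the next renderToString pass, which is exactly the bug described above.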