Update 15 Mar 2024: In this morning's Hands-on SAP Dev live stream we added this feature to our test project, so you can see this whole thing in action.
Earlier this month in part 7 of our back to basics Hands-on SAP Dev live stream series on CAP Node.js, we added a new element countryOfBirth to the Authors entity definition, so that our simple services.cds looked like this:
using { cuid, Country } from '@sap/cds/common';

service bookshop {
  entity Books : cuid {
    title: String;
  }
  entity Authors : cuid {
    name: String;
    countryOfBirth: Country;
  }
}
This resulted in the generation of lots of DDL for a persistence layer, based on the definition of that Country type, which, in @sap/cds/common, looks like this (and I've also included here the definitions that are used to describe that type):
type Country : Association to sap.common.Countries;

context sap.common {
  entity Countries : CodeList {
    key code : String(3) @(title : '{i18n>CountryCode}');
  }
  aspect CodeList @(
    cds.autoexpose,
    cds.persistence.skip : 'if-unused'
  ) {
    name  : localized String(255) @title : '{i18n>Name}';
    descr : localized String(1000) @title : '{i18n>Description}';
  }
}
As a result of referring to the Country type in @sap/cds/common, we saw this in the output of cds watch:
[cds] - loaded model from 2 file(s):
services.cds
[...]/node_modules/@sap/cds/common.cds
See the Appendix - loading @sap/cds/common section for an explanation of why [...] has been used as a path prefix indicator here.
Additionally, we saw that as well as entity sets for the Books and Authors entities, the OData service also contained two more entity sets, as we can see from the service document (which can be obtained with curl localhost:4004/odata/v4/bookshop | jq .):
{
"@odata.context": "$metadata",
"@odata.metadataEtag": "W/\"l+enQJd57takPctEB4NIbv/1U6KLaLMKeKijx7AfnOo=\"",
"value": [
{
"name": "Books",
"url": "Books"
},
{
"name": "Authors",
"url": "Authors"
},
{
"name": "Countries",
"url": "Countries"
},
{
"name": "Countries_texts",
"url": "Countries_texts"
}
]
}
as well as in the CAP server landing page:
For those of you wondering why Countries_texts is not listed in the CAP server landing page, there's an interesting reason, but that's a story for another time.
The response to an OData query operation on the Countries entity set looked, however, like this:
{
"@odata.context": "$metadata#Countries",
"value": []
}
No data.
On the one hand, that's understandable: we haven't supplied any. But on the other hand (and as in the discussion that took place at the time mentioned) it would be great to have that data. Not only for the Country type, but also for the other CAP common reuse types Currency, Language and Timezone.
After all, that data is standard, predictable and pretty much static. It's also something that we all take for granted in R/3 systems, for example, in these tables (and their related -T suffixed language-dependent siblings):

T005 (countries)
TCUR (currencies)
T002 (languages)
TTZZ (timezones)

In the out-of-the-box provisions from CAP, we don't have this data. But we do have the data in the form of a standard installable NPM package!
I'd totally forgotten about this, which is why I failed to mention it while we were discussing the question. So as a penance (not sure whether to me as the writer of this post, or to you as the reader, sorry) I'm writing up the details here now.
The NPM package @sap/cds-common-content "holds default content based on the ISO specification" for these exact types. Bingo!
The simplest way to make use of this package is to add it to your project:
npm add @sap/cds-common-content
and then add a using directive in your CDS, such as:
using from '@sap/cds-common-content';
I prefer the semantics of invoking npm add over npm install, but as add is just an alias for install, it's all the same under the npm hood anyway.
This all seems quite straightforward. So let's now move away from the customised bookshop project from the live stream series and start with a super simple example project, where we'll see how easy it is to get the data to appear. And it will seem like magic! Then we'll dig in to how it actually works, which will help us understand that bit more about how CAP works. And that's always a bonus, right?
So, moving away from the authors and books in the previous services.cds file, we'll start with a brand new CAP project for this simple example, so you can follow along too if you want.
While initialising the new project, we'll use the --add option to request the addition of the "tiny-sample" facet, which gives us a super simple service exposing a single Books entity, complete with a couple of data records supplied in a CSV file.
Here's an example of doing that, with the use of the tree command at the end to show the contents of the new project directory (excluding the hidden files):
# /home/user/work/scratch
; cds init --add tiny-sample iso-data-test
Creating new CAP project in ./iso-data-test
Adding feature 'nodejs'...
Adding feature 'tiny-sample'...
Successfully created project. Continue with 'cd iso-data-test'.
Find samples on https://github.com/SAP-samples/cloud-cap-samples
Learn about next steps at https://cap.cloud.sap
# /home/user/work/scratch
; cd iso-data-test/
# /home/user/work/scratch/iso-data-test
; tree -F
.
|-- README.md
|-- app/
|-- db/
| |-- data/
| | `-- my.bookshop-Books.csv
| `-- data-model.cds
|-- package.json
`-- srv/
`-- cat-service.cds
5 directories, 5 files
# /home/user/work/scratch/iso-data-test
;
The -F option tells tree to use standard symbols to signify special files, as I want to highlight directories with a trailing /. This -F works in a similar way to the same-named option with the ls command.
The persistence layer definitions in db/data-model.cds look like this:
namespace my.bookshop;
entity Books {
  key ID : Integer;
  title : String;
  stock : Integer;
}
and the service layer definitions in srv/cat-service.cds look like this:
using my.bookshop as my from '../db/data-model';

service CatalogService {
  @readonly entity Books as projection on my.Books;
}
Nothing unexpected there, all nice and straightforward.
Starting the CAP server up at this point, we see (amongst other log lines):
[cds] - loaded model from 2 file(s):
srv/cat-service.cds
db/data-model.cds
[cds] - connect to db > sqlite { database: ':memory:' }
> init from db/data/my.bookshop-Books.csv
/> successfully deployed to in-memory database.
And the service document at http://localhost:4004/odata/v4/catalog looks like this:
{
"@odata.context": "$metadata",
"@odata.metadataEtag": "W/\"8PKoOs3VhYwQoFzBoQObhMsFJJa5jpD1GLFcWZG9r60=\"",
"value": [
{
"name": "Books",
"url": "Books"
}
]
}
So far so good.
We've covered this in the back to basics series but it's worth re-iterating here too ... the reason why we see these two files specifically:

srv/cat-service.cds
db/data-model.cds

listed in the sources for the CDS model being served (see the "loaded model from 2 file(s)" message in the log output above), is that they're in some specially named directories (db/ and srv/) that form part of CAP's convention-over-configuration approach to doing the right thing by developers. On startup, the server will automatically look in certain "well-known" locations for CDS definitions. What are these "well-known" locations?
You can ask to see them like this:
cds env folders
which emits:
{ db: 'db/', srv: 'srv/', app: 'app/' }
In fact, there's another environment value which it's possible to query, and that is roots:

cds env roots

and the value returned:

[ 'db/', 'srv/', 'app/', 'schema', 'services' ]

contains these three directory names, plus two special filenames services and schema, which we can interpret as services.cds and schema.cds respectively.
💡 This is incidentally why, in the simple capb2b project we're using for the early episodes in the back to basics series, simply putting all our content into a file called services.cds works!
Now to start moving towards the use of the Country type.
First, let's add an element to the Books entity definition to show where a book was published. We'll do this (and other enhancements in this experiment) in a separate CDS file, to remind ourselves of how well thought out and capable the CDS language and compilation process is.
In a new file, let's call it db/publicationinfo.cds, let's add this:
using from './data-model';
using { Country } from '@sap/cds/common';

extend my.bookshop.Books with {
  publishedIn: Country;
}
The first using directive just imports the definitions from the existing db/data-model.cds file, i.e. the Books entity in the my.bookshop namespace. With this first using directive we can then refer to the my.bookshop.Books entity, as we do with the extend directive shortly.

The second using directive is to bring in the definition of the Country reuse type from @sap/cds/common. This is so we can use this Country type to describe the new element we're adding to the my.bookshop.Books entity.

With the extend directive we can simply add the new publishedIn element and define it as having the Country type. We already know about how this type is defined from the background section earlier.

As the CAP server is still running in "watch" mode, things restart and now we see something like this:
[cds] - loaded model from 4 file(s):
srv/cat-service.cds
db/publicationinfo.cds
[...]/node_modules/@sap/cds/common.cds
db/data-model.cds
[cds] - connect using bindings from: { registry: '~/.cds-services.json' }
[cds] - connect to db > sqlite { database: ':memory:' }
> init from db/data/my.bookshop-Books.csv
/> successfully deployed to in-memory database.
The log output here shows us that there are two new files in the list from which the model has been loaded:
db/publicationinfo.cds
[...]/node_modules/@sap/cds/common.cds
What's happened of course is that a new file, publicationinfo.cds, is discovered in the well-known db/ directory. So that is loaded and added to the overall model compilation. And within that file, the using { Country } from '@sap/cds/common'; directive causes the corresponding file from that (built-in) NPM package @sap/cds/common to be loaded too. Nice!
Now that the CAP server has restarted, serving the new enhanced overall model, we see with a request like this:

curl -s localhost:4004/odata/v4/catalog | jq .

that the OData service document now sports the Countries and Countries_texts entity sets too (because, briefly, sap.common.Countries is defined as an entity in [...]/node_modules/@sap/cds/common.cds and therefore is exposed as an entity set in the OData service):
{
"@odata.context": "$metadata",
"@odata.metadataEtag": "W/\"2n4HnZUJly4q6xyptJ6+4ZptvIICSUNdpk6NYX73bGY=\"",
"value": [
{
"name": "Books",
"url": "Books"
},
{
"name": "Countries",
"url": "Countries"
},
{
"name": "Countries_texts",
"url": "Countries_texts"
}
]
}
Again. So far, so good.
We have some books data, courtesy of the two CSV records that came as part of the "tiny-sample" facet, which we can see with:
curl -s localhost:4004/odata/v4/catalog/Books | jq .
which returns this entity set response:
{
"@odata.context": "$metadata#Books",
"value": [
{
"ID": 1,
"title": "Wuthering Heights",
"stock": 100,
"publishedIn_code": null
},
{
"ID": 2,
"title": "Jane Eyre",
"stock": 500,
"publishedIn_code": null
}
]
}
Those eagle-eyed amongst you might be wondering about the publishedIn_code property. That's one that's been auto-generated as a result of CAP's excellent managed associations constructs.

Here, specifically, it comes from the combination of the Country type used to describe the publishedIn element:

extend my.bookshop.Books with {
  publishedIn: Country;
}

and the very definition of Country, which is a managed "to-one" association as we have already seen:

type Country : Association to sap.common.Countries;

This results in, amongst other things, a foreign-key relationship being needed, and being realised via the construction of a property, made up of the names of the two elements in the relationship, i.e.:

publishedIn (the new element in my.bookshop.Books)
code (the key element in sap.common.Countries)

joined with an _ underscore character to become publishedIn_code.
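The naming rule is mechanical: association element name, then the target's key element name, joined with an underscore. As a toy illustration of the convention in plain shell (nothing CAP-specific is being run here; the variable names are just for illustration):

```shell
# Foreign-key property name for a managed to-one association:
# <association-element>_<target-key-element>
element="publishedIn"   # the new element in my.bookshop.Books
target_key="code"       # the key element in sap.common.Countries
echo "${element}_${target_key}"
# prints: publishedIn_code
```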
However, we don't yet have any country data. Requesting the equivalent entity set like this:
curl -s localhost:4004/odata/v4/catalog/Countries | jq .
returns a rather sad and empty looking entity set:
{
"@odata.context": "$metadata#Countries",
"value": []
}
So ... @sap/cds-common-content to the rescue!
Like we saw earlier, this can be brought into the project simply by adding it as a package. So let's do that now:
npm add @sap/cds-common-content
After seeing output like this:
added 115 packages, and audited 116 packages in 17s
21 packages are looking for funding
run `npm fund` for details
found 0 vulnerabilities
we see that package.json now has the package listed in the dependencies section:
{
"name": "iso-data-test",
"version": "1.0.0",
"description": "A simple CAP project.",
"repository": "<Add your repository here>",
"license": "UNLICENSED",
"private": true,
"dependencies": {
"@sap/cds": "^7",
"@sap/cds-common-content": "^1.4.0",
"express": "^4"
},
"devDependencies": {
"@cap-js/sqlite": "^1"
},
"scripts": {
"start": "cds-serve"
}
}
and the rest of the dependencies have been installed too (through a more general NPM install side-effect) - we can see this with npm list; here's an example invocation:
; npm list
iso-data-test@1.0.0 /home/user/work/scratch/iso-data-test
+-- @cap-js/sqlite@1.5.1
+-- @sap/cds-common-content@1.4.0
+-- @sap/cds@7.7.2
`-- express@4.18.3
So now, to actually make use of this package and what it brings, we have to add a using directive, as we saw earlier.
Let's add that to the db/publicationinfo.cds file, like this:
using from './data-model';
using { Country } from '@sap/cds/common';
using from '@sap/cds-common-content';

extend my.bookshop.Books with {
  publishedIn: Country;
}
As the CAP server is still running in "watch" mode, it restarts, and 💥 what an explosion of log output!
[cds] - loaded model from 6 file(s):
srv/cat-service.cds
db/publicationinfo.cds
node_modules/@sap/cds-common-content/index.cds
db/data-model.cds
node_modules/@sap/cds-common-content/db/index.cds
node_modules/@sap/cds/common.cds
[cds] - connect using bindings from: { registry: '~/.cds-services.json' }
[cds] - connect to db > sqlite { url: ':memory:' }
> init from node_modules/@sap/cds-common-content/db/data/sap-common-Countries_texts_zh_TW.csv
> init from node_modules/@sap/cds-common-content/db/data/sap-common-Countries_texts_zh_CN.csv
> init from node_modules/@sap/cds-common-content/db/data/sap-common-Countries_texts_tr.csv
> init from node_modules/@sap/cds-common-content/db/data/sap-common-Countries_texts_th.csv
> init from node_modules/@sap/cds-common-content/db/data/sap-common-Countries_texts_sv.csv
> init from node_modules/@sap/cds-common-content/db/data/sap-common-Countries_texts_ru.csv
> init from node_modules/@sap/cds-common-content/db/data/sap-common-Countries_texts_ro.csv
> init from node_modules/@sap/cds-common-content/db/data/sap-common-Countries_texts_pt.csv
> init from node_modules/@sap/cds-common-content/db/data/sap-common-Countries_texts_pl.csv
> init from node_modules/@sap/cds-common-content/db/data/sap-common-Countries_texts_no.csv
> init from node_modules/@sap/cds-common-content/db/data/sap-common-Countries_texts_nl.csv
> init from node_modules/@sap/cds-common-content/db/data/sap-common-Countries_texts_ms.csv
> init from node_modules/@sap/cds-common-content/db/data/sap-common-Countries_texts_ko.csv
> init from node_modules/@sap/cds-common-content/db/data/sap-common-Countries_texts_ja.csv
> init from node_modules/@sap/cds-common-content/db/data/sap-common-Countries_texts_it.csv
> init from node_modules/@sap/cds-common-content/db/data/sap-common-Countries_texts_hu.csv
> init from node_modules/@sap/cds-common-content/db/data/sap-common-Countries_texts_fr.csv
> init from node_modules/@sap/cds-common-content/db/data/sap-common-Countries_texts_fi.csv
> init from node_modules/@sap/cds-common-content/db/data/sap-common-Countries_texts_es_MX.csv
> init from node_modules/@sap/cds-common-content/db/data/sap-common-Countries_texts_es.csv
> init from node_modules/@sap/cds-common-content/db/data/sap-common-Countries_texts_en.csv
> init from node_modules/@sap/cds-common-content/db/data/sap-common-Countries_texts_de.csv
> init from node_modules/@sap/cds-common-content/db/data/sap-common-Countries_texts_da.csv
> init from node_modules/@sap/cds-common-content/db/data/sap-common-Countries_texts_cs.csv
> init from node_modules/@sap/cds-common-content/db/data/sap-common-Countries_texts_ar.csv
> init from node_modules/@sap/cds-common-content/db/data/sap-common-Countries.csv
> init from db/data/my.bookshop-Books.csv
/> successfully deployed to in-memory database.
Wow! What's more, we now have country data in the Countries entity set:
curl -s 'localhost:4004/odata/v4/catalog/Countries?$top=5' \
| jq .
{
"@odata.context": "$metadata#Countries",
"value": [
{
"name": "Andorra",
"descr": "Andorra",
"code": "AD"
},
{
"name": "Utd Arab Emir.",
"descr": "United Arab Emirates",
"code": "AE"
},
{
"name": "Afghanistan",
"descr": "Afghanistan",
"code": "AF"
},
{
"name": "Antigua/Barbuda",
"descr": "Antigua and Barbuda",
"code": "AG"
},
{
"name": "Anguilla",
"descr": "Anguilla",
"code": "AI"
}
]
}
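As an aside, responses like this lend themselves to further slicing with jq. For instance, taking a shortened version of the response above and reducing it to just the country codes:

```shell
# Keep only the code property of each entry in the value array.
echo '{
  "value": [
    { "name": "Andorra", "descr": "Andorra", "code": "AD" },
    { "name": "Utd Arab Emir.", "descr": "United Arab Emirates", "code": "AE" }
  ]
}' | jq -c '.value | map(.code)'
# prints: ["AD","AE"]
```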
That's fab. But. What's going on? Where is this coming from? How does this work?
Let's take a bit of time to figure out how this is all working, and why we now have country data.
We added a single line to the CDS model:
using from '@sap/cds-common-content';
What did the addition of this line actually do to cause that explosion of change and the appearance of ISO country data?
Well, our first clue is the extra entries that now are appearing in the list of files from which the CDS model is constructed:
[cds] - loaded model from 6 file(s):
srv/cat-service.cds
db/publicationinfo.cds
node_modules/@sap/cds-common-content/index.cds
db/data-model.cds
node_modules/@sap/cds-common-content/db/index.cds
node_modules/@sap/cds/common.cds
Working through them, we first see our base files:
srv/cat-service.cds
db/data-model.cds
We also see the two extra files that were picked up once we added the db/publicationinfo.cds file, which itself summoned the @sap/cds/common content:
db/publicationinfo.cds
node_modules/@sap/cds/common.cds
At this point I've stopped being deliberately vague (with [...]) about the specific location of the files in node_modules/, because it's worth highlighting here something that has changed. The npm add action just before caused the rest of the packages (defined in package.json) to be installed in the project. This means that there's now a project-local node_modules/ directory containing everything that this project needs, including all the @sap prefixed NPM packages.

A quick tree -F -L 3 shows the directory structure containing those resources (I've removed some of the output for brevity):

./
|-- README.md
|-- app/
|-- db/
| |-- data/
| | `-- my.bookshop-Books.csv
| |-- data-model.cds
| `-- publicationinfo.cds
|-- node_modules/
| |-- @cap-js/
| | |-- cds-types/
| | |-- db-service/
| | `-- sqlite/
| |-- @sap/
| | |-- cds/
| | |-- cds-common-content/
| | |-- cds-compiler/
| | |-- cds-fiori/
| | `-- cds-foss/
| |-- ...
| `-- yaml/
| |-- LICENSE
| |-- README.md
| |-- bin.mjs*
| |-- browser/
| |-- dist/
| |-- package.json
| `-- util.js
|-- package-lock.json
|-- package.json
`-- srv/
`-- cat-service.cds

So this means that the @sap/cds/common package is being loaded now from the project-local set of packages, i.e. in node_modules/ relative to the project directory (i.e. ./node_modules/), and not from the global NPM package area any more.

This in turn means that the full (relative) path to this file now in the list is clean and short(er):

node_modules/@sap/cds/common.cds
OK, so we know why these four of the six files are being loaded, and where from:
srv/cat-service.cds
db/data-model.cds
db/publicationinfo.cds
node_modules/@sap/cds/common.cds
So what about the other two in the list:
node_modules/@sap/cds-common-content/index.cds
node_modules/@sap/cds-common-content/db/index.cds
which are now also being brought in to construct the model?
Well, given the "cds-common-content" that appears in the path of these files, we can be pretty certain that they're related to this single line we just added:
using from '@sap/cds-common-content';
so what is actually happening here?
Well, if we look a bit deeper inside the @sap/cds-common-content package, like this:
tree -F -L 2 node_modules/@sap/cds-common-content
there's a bit of a clue in the output, especially if we're familiar with CAP's Reuse and Compose concepts:
node_modules/@sap/cds-common-content/
|-- CHANGELOG.md
|-- LICENSE
|-- README.md
|-- db/
| |-- index.cds
| `-- data/
|-- index.cds
`-- package.json
3 directories, 6 files
Look at those index.cds files. The Using index.cds Entry Points section of the Reuse and Compose section of the Capire documentation says this, in the context of a using directive like we have here (i.e. using from '@sap/cds-common-content';):

"The using from statements assume that the imported packages provide index.cds in their roots as public entry points, which they do."
So, that means that the using directive will cause this file:

node_modules/@sap/cds-common-content/index.cds

to also be loaded and its CDS contents added into the overall model construction. So what's in this file? This:
using from './db';
Curiouser and curiouser!
So let's now follow this using directive, the resource reference within which should be interpreted as local to the containing index.cds file, i.e. we're now going to follow this path to ./db/, which also contains an index.cds:
node_modules/@sap/cds-common-content/
|-- CHANGELOG.md
|-- LICENSE
|-- README.md
|-- db/
| |-- index.cds <--------------------+
| `-- data/ |
|-- index.cds -- using from './db'; ---+
`-- package.json
So what's in ./db/index.cds? This:
using sap.common.Languages from '@sap/cds/common';
using sap.common.Countries from '@sap/cds/common';
using sap.common.Currencies from '@sap/cds/common';
using sap.common.Timezones from '@sap/cds/common';
Ooh!
What is the effect of doing this? Well, it has a direct effect and a sort of side-effect too.

The direct effect is that all four entity definitions referenced in this ./db/index.cds file, that is to say these four:

sap.common.Languages
sap.common.Countries
sap.common.Currencies
sap.common.Timezones

are added to the overall model.

But the side-effect is that the ./db/data/ directory here also becomes a candidate location for the automatic provision of initial data!

And what's in that ./db/data/ directory? Let's have a look:
ls node_modules/@sap/cds-common-content/db/data/
Lots and lots of CSV files (I've cut the list down to just a few here):
./ sap-common-Languages_texts_cs.csv
../ sap-common-Languages_texts_da.csv
sap-common-Countries.csv sap-common-Languages_texts_de.csv
sap-common-Countries_texts.csv sap-common-Languages_texts_en.csv
sap-common-Countries_texts_ar.csv sap-common-Languages_texts_es.csv
sap-common-Countries_texts_zh_CN.csv sap-common-Timezones_texts_cs.csv
sap-common-Countries_texts_zh_TW.csv sap-common-Timezones_texts_da.csv
sap-common-Currencies.csv sap-common-Timezones_texts_de.csv
sap-common-Currencies_texts.csv sap-common-Timezones_texts_el.csv
sap-common-Currencies_texts_ar.csv sap-common-Timezones_texts_en.csv
...
And we know what happens with initial data, for those entities whose namespaced-names match up with CSV filenames in directories such as these - the data is automatically imported to become data for those entities!
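The matching here is purely name-based: the CSV filename is the fully qualified entity name (with the namespace dots optionally written as dashes, as in these sap-common-* files) plus a .csv extension. As a tiny illustration, reversing that mapping is just string manipulation:

```shell
# Recover the entity name that an initial-data CSV file targets:
# drop the .csv extension and map dashes back to namespace dots.
f="sap-common-Countries.csv"
echo "$f" | sed 's/\.csv$//; s/-/./g'
# prints: sap.common.Countries
```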
Notice too that for each of the four entities, there is a single CSV file containing the core data, and multiple CSV files for the corresponding localized elements, in accompanying _texts* suffixed files.
Here's one example, for the sap.common.Countries entity. A neat list of the initial CSV data files for this entity can be retrieved with:
cd node_modules/@sap/cds-common-content/db/data/ \
&& ls -1 sap-common-Countries*.csv
which produces:
sap-common-Countries.csv
sap-common-Countries_texts.csv
sap-common-Countries_texts_ar.csv
sap-common-Countries_texts_cs.csv
sap-common-Countries_texts_da.csv
sap-common-Countries_texts_de.csv
sap-common-Countries_texts_en.csv
sap-common-Countries_texts_es.csv
sap-common-Countries_texts_es_MX.csv
sap-common-Countries_texts_fi.csv
sap-common-Countries_texts_fr.csv
sap-common-Countries_texts_hu.csv
sap-common-Countries_texts_it.csv
sap-common-Countries_texts_ja.csv
sap-common-Countries_texts_ko.csv
sap-common-Countries_texts_ms.csv
sap-common-Countries_texts_nl.csv
sap-common-Countries_texts_no.csv
sap-common-Countries_texts_pl.csv
sap-common-Countries_texts_pt.csv
sap-common-Countries_texts_ro.csv
sap-common-Countries_texts_ru.csv
sap-common-Countries_texts_sv.csv
sap-common-Countries_texts_th.csv
sap-common-Countries_texts_tr.csv
sap-common-Countries_texts_zh_CN.csv
sap-common-Countries_texts_zh_TW.csv
We can see the three groups of files for sap.common.Countries:

sap-common-Countries.csv - containing values for the code, name and descr fields (in English as default).
sap-common-Countries_texts.csv - where the filename is not specific to an explicit locale - containing values for the locale, code, name and descr fields; the language-specific content is English by default.
sap-common-Countries_texts_<locale-identifier>.csv - containing the same data as the core localized file but with the texts translated into the language indicated by the locale-identifier in the file name.

You'll likely remember seeing a list of all these CSV files in the explosion of output from the running CAP server earlier.
And what's the outcome of this?
To find out, let's expand the core books data now to include values for the new publishedIn_code field at the persistence layer, so that the content of db/data/my.bookshop-Books.csv now looks like this:
ID;title;stock;publishedIn_code
1;Wuthering Heights;100;DE
2;Jane Eyre;500;HU
Yes I know Wuthering Heights wasn't published in Germany, nor was Jane Eyre published in Hungary, but thank you for wondering about that.
Now, NOT ONLY (μέν) can we follow navigation properties to see the publication countries for our books, like this (note the country names "Germany" and "Hungary"):
curl \
--silent \
--url 'localhost:4004/odata/v4/catalog/Books?$expand=publishedIn' \
| jq .
to get this:
{
"@odata.context": "$metadata#Books(publishedIn())",
"value": [
{
"ID": 1,
"title": "Wuthering Heights",
"stock": 100,
"publishedIn_code": "DE",
"publishedIn": {
"name": "Germany",
"descr": "Germany",
"code": "DE"
}
},
{
"ID": 2,
"title": "Jane Eyre",
"stock": 500,
"publishedIn_code": "HU",
"publishedIn": {
"name": "Hungary",
"descr": "Hungary",
"code": "HU"
}
}
]
}
BUT ALSO (δέ) we can ask for this information in our own language (locale)! Here's an example, requesting the same resource but with a different locale (French), via standard HTTP headers:
I'm fond of the strong particle pairing of μέν ... δέ which I first learned about as an important construct in Ancient Greek, and I find myself often using the (English) equivalent ("not only ... but also").
curl \
--silent \
--header 'Accept-Language: fr' \
--url 'localhost:4004/odata/v4/catalog/Books?$expand=publishedIn' \
| jq .
The representation of the resource requested is now different, in that the names of the countries are now in French ("Allemagne" and "Hongrie"):
{
"@odata.context": "$metadata#Books(publishedIn())",
"value": [
{
"ID": 1,
"title": "Wuthering Heights",
"stock": 100,
"publishedIn_code": "DE",
"publishedIn": {
"name": "Allemagne",
"descr": "Allemagne",
"code": "DE"
}
},
{
"ID": 2,
"title": "Jane Eyre",
"stock": 500,
"publishedIn_code": "HU",
"publishedIn": {
"name": "Hongrie",
"descr": "Hongrie",
"code": "HU"
}
}
]
}
C'est vraiment magnifique!
Of course, one wouldn't often use an explicit Accept-Language header in an HTTP request, but think of what headers your browser sends by default when making requests, and think of what your French colleague's browser might send. Exactly!
OK, I think I've reached a point where I can now safely escape this rabbit hole of discovery. The bottom line is that there is a standard NPM package available that provides actual ISO data for the four CAP reuse types. You've learned how to use it, and what it provides. Moreover, you've learned how it provides what it does, and what goes on behind the scenes. That, hopefully, has given you a tiny bit more insight into the wonders of CAP.
And there's still so much more to discover. Until next time!
In the Background section earlier, the output of cds watch showed this line:

[...]/node_modules/@sap/cds/common.cds

The reason I included a [...] "prefix" was to signify that there will be a different path shown depending on whether an npm install of the project dependencies (which include @sap/cds) has been executed, or not.
If an npm install has been executed, then there will be a project-local node_modules/ directory, and the @sap/cds/common resource will have been loaded from within there, i.e.

./node_modules/@sap/cds/common.cds

If an npm install hasn't been executed, and we're running the CAP server from the NPM globally installed @sap/cds-dk package, then it will have been loaded from that globally installed package location, and look something like this (depending on what your NPM global package management directory setup looks like):
../../usr/local/share/npm-global/lib/
node_modules/@sap/cds-dk/
node_modules/@sap/cds/common.cds
This location value would typically be shown in a single line; I've split the value up over a few lines purely for readability.
You can find out where your (normally globally installed) @sap/cds-dk package is, using the cds env command:
cds env _home_cds-dk
In the context of this particular dev container in which I'm working, this is the value that is emitted:
'/usr/lib/node_modules/@sap/cds-dk/'
to_entries
and from_entries
can be, and show how with_entries
is a great extension of that.
I pulled some stats from the YouTube Data API v3 for the episodes so far in our Back to basics: CAP Node.js Hands-on SAP Dev live stream series. Surprisingly, the values for numbers of views and likes and so on are in string representation. Here's what the dataset (in a file called series.json) looks like as of right now:
[
{
"id": "gu5r1EWSDSU",
"title": "Back to basics with CAP - part 1",
"statistics": {
"viewCount": "8916",
"likeCount": "247",
"favoriteCount": "0",
"commentCount": "15"
}
},
{
"id": "8N2TxgZ9bjY",
"title": "Back to basics with CAP - part 2",
"statistics": {
"viewCount": "4045",
"likeCount": "91",
"favoriteCount": "0",
"commentCount": "5"
}
},
{
"id": "mTvjAthGjBg",
"title": "Back to basics with CAP - part 3",
"statistics": {
"viewCount": "2708",
"likeCount": "79",
"favoriteCount": "0",
"commentCount": "11"
}
},
{
"id": "1ywiOaGVA5w",
"title": "Back to basics with CAP - part 4",
"statistics": {
"viewCount": "2082",
"likeCount": "72",
"favoriteCount": "0",
"commentCount": "8"
}
},
{
"id": "fgqnptEgUW4",
"title": "Back to basics with CAP - part 5",
"statistics": {
"viewCount": "1926",
"likeCount": "47",
"favoriteCount": "0",
"commentCount": "8"
}
},
{
"id": "NZj7Q4LBotA",
"title": "Back to basics with CAP - part 6",
"statistics": {
"viewCount": "1545",
"likeCount": "43",
"favoriteCount": "0",
"commentCount": "11"
}
}
]
If you look at the videos resource documentation, you'll see that the intended representations for these resources are indeed strings.
Anyway, for various reasons, including a desire to surface this info in a custom Home Assistant dashboard, and to be able to perform calculations upon the values, I wanted all the figures as numbers rather than strings.
While using with_entries is not entirely natural for me yet, I find I'm only a step away, because I know that to_entries brings me from what I am starting with to something a lot closer to a structure that I can automatically manipulate. What I mean is that identifying the four separate properties within the statistics object is difficult to do automatically as they have dynamic names, but using to_entries makes that problem go away, especially with its sibling from_entries to turn things back again to how they were.

And once I have got that straight in my head, I know I can turn to the cousin of to_entries and from_entries, namely with_entries.
Here's an example of what to_entries does to the statistics object in the first object in the array (to keep things brief).
In all the following examples, I'll just show the jq. For example, the first | .statistics jq directly below would actually be invoked like this: jq 'first | .statistics' series.json. Also, longer jq invocations will be wrapped with newlines for readability.
First, let's identify the focus of transformation:
first | .statistics
This shows us:
{
"viewCount": "8916",
"likeCount": "247",
"favoriteCount": "0",
"commentCount": "15"
}
Then, applying to_entries like this:
first | .statistics | to_entries
we get:
[
{
"key": "viewCount",
"value": "8916"
},
{
"key": "likeCount",
"value": "247"
},
{
"key": "favoriteCount",
"value": "0"
},
{
"key": "commentCount",
"value": "15"
}
]
Now each of the properties (e.g. "viewCount": "8916") is normalised into an object with the static key names key and value (e.g. { "key": "viewCount", "value": "8916" }), and all these objects are contained in an array.
This then means we can apply a general transformation over that array. So let's try tonumber, like this:
first | .statistics | to_entries
| map(.value |= tonumber)
This results in:
[
{
"key": "viewCount",
"value": 8916
},
{
"key": "likeCount",
"value": 247
},
{
"key": "favoriteCount",
"value": 0
},
{
"key": "commentCount",
"value": 15
}
]
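Incidentally, the .value |= tonumber part uses jq's update-assignment operator |=, which applies a filter to the value at a path and writes the result back to that path, i.e. .value |= f behaves like .value = (.value | f). A quick sanity check on a single entry (an aside of mine, assuming jq is installed):

```shell
# .value |= tonumber updates the value field in place;
# it behaves like .value = (.value | tonumber)
echo '{"key":"viewCount","value":"8916"}' | jq -c '.value |= tonumber'
# → {"key":"viewCount","value":8916}
```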
Getting back to the structure we started with is a job for from_entries:
first | .statistics | to_entries
| map(.value |= tonumber)
| from_entries
This results in:
{
"viewCount": 8916,
"likeCount": 247,
"favoriteCount": 0,
"commentCount": 15
}
Nice!
Using to_entries, mapping over the entries in the resulting array, then using from_entries is such a common pattern that there's also with_entries, which is itself defined as a built-in in jq:
def with_entries(f): to_entries | map(f) | from_entries;
So we can reduce the previous incantation down to:
first | .statistics | with_entries(.value |= tonumber)
This gives us exactly the same result.
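We can verify that equivalence directly on the statistics object from the first video (assuming jq is installed; the sample data is from the dataset above):

```shell
# Check that with_entries(f) gives the same result as
# to_entries | map(f) | from_entries on the sample statistics object
stats='{"viewCount":"8916","likeCount":"247","favoriteCount":"0","commentCount":"15"}'
long=$(echo "$stats" | jq -c 'to_entries | map(.value |= tonumber) | from_entries')
short=$(echo "$stats" | jq -c 'with_entries(.value |= tonumber)')
# Both pipelines produce the same compact JSON object
[ "$long" = "$short" ] && echo "$short"
```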
So using with_entries we can transform the statistics properties of all of the video entries, like this:
map(.statistics |= with_entries(.value |= tonumber))
[
{
"id": "gu5r1EWSDSU",
"title": "Back to basics with CAP - part 1",
"statistics": {
"viewCount": 8916,
"likeCount": 247,
"favoriteCount": 0,
"commentCount": 15
}
},
{
"id": "8N2TxgZ9bjY",
"title": "Back to basics with CAP - part 2",
"statistics": {
"viewCount": 4045,
"likeCount": 91,
"favoriteCount": 0,
"commentCount": 5
}
},
{
"id": "mTvjAthGjBg",
"title": "Back to basics with CAP - part 3",
"statistics": {
"viewCount": 2708,
"likeCount": 79,
"favoriteCount": 0,
"commentCount": 11
}
},
{
"id": "1ywiOaGVA5w",
"title": "Back to basics with CAP - part 4",
"statistics": {
"viewCount": 2082,
"likeCount": 72,
"favoriteCount": 0,
"commentCount": 8
}
},
{
"id": "fgqnptEgUW4",
"title": "Back to basics with CAP - part 5",
"statistics": {
"viewCount": 1926,
"likeCount": 47,
"favoriteCount": 0,
"commentCount": 8
}
},
{
"id": "NZj7Q4LBotA",
"title": "Back to basics with CAP - part 6",
"statistics": {
"viewCount": 1545,
"likeCount": 43,
"favoriteCount": 0,
"commentCount": 11
}
}
]
Perfect!
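And this is where the conversion pays off: with the values as numbers, the calculations I mentioned wanting to perform become trivial. As a small follow-on sketch (not part of the original walkthrough, and using a trimmed two-element sample of series.json rather than the real file), here's the total view count across entries:

```shell
# With numeric values, arithmetic becomes straightforward; for example,
# totalling the view counts over a (two-element) sample of the dataset
sample='[{"statistics":{"viewCount":"8916"}},{"statistics":{"viewCount":"4045"}}]'
echo "$sample" | jq '
  map(.statistics |= with_entries(.value |= tonumber))
  | map(.statistics.viewCount)
  | add
'
# → 12961
```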
code in VS Code, which, when invoked in the terminal (e.g. code services.cds) opens the file directly in a VS Code editor window, like this:
The question was about code being recognised in SAP Business Application Studio (BAS) dev spaces.
Basically, while code is not a command that's available in dev spaces, there's a BAS-specific command basctl which has a couple of options, one of which is --open. Here are some examples, taken from the usage text:
Examples
$ basctl --open http://sap.com
$ basctl --open http://localhost:8082/tmp
$ basctl --open file:///home/user/projects/proj1/myfile.txt
$ basctl --open /myfile.txt
$ basctl --open ./myfolder/myfile.txt
So while there isn't a code command, you can use basctl --open to get something similar. I say similar because, for some reason I cannot yet fathom (my small brain, again), it opens the file in a new column. Anyway, here's what it looks like in action:
(I've asked internally about this behaviour, and will update this blog post with anything I find out.)
(Update 05 Mar 2024: It turns out this was unintended behaviour, which my question internally highlighted, and the behaviour has now been fixed - see pull request 299 in the app-studio-toolkit repo. The fix will reach production environments by the middle of this month.)
The nice thing about what basctl offers is perhaps the ability to invoke framework commands, via an additional --command option, like this: basctl --command workbench.action.openSettings.
The question also asked about my use of tree, and noted its lack of availability in BAS dev spaces. This is simple to address, if not entirely straightforward. I got tree working in my dev space, as you can see:
I did this by copying in a tree binary (and ensuring the execution bit was set). Where did I get that tree binary from? Well, first, I looked at what the architecture of the dev spaces was, via uname (I've added whitespace for readability):
user: user $ uname -a
Linux workspaces-ws-nvzxc-deployment-9f9b9b656-sfdh5
5.15.135-gardenlinux-cloud-amd64
SMP Debian 5.15.135-0gardenlinux1 (2023-10-12)
x86_64 GNU/Linux
I also checked what distribution the environment was based on:
user: user $ cat /etc/issue
Debian GNU/Linux 12 \n \l
Basically, it's Debian 12 on x86_64 architecture. Classic. So then I created a quick container from a Debian 12 based container image, via a codespace that I spun up for the purpose, and copied the tree binary out of there to my local filesystem, like this:
gh codespace cp 'remote:/usr/bin/tree' .
I then copied that binary to the dev space by dragging it into the Explorer window, and then set the execution bit with chmod +x $HOME/tree.
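As an aside, the two manual checks above (architecture via uname, distribution via the release info) can be scripted in one go. This is my own sketch, assuming a Linux system that provides /etc/os-release, as Debian 12 does:

```shell
# A scripted version of the two manual checks: machine architecture
# plus distribution id and version, useful for deciding which
# binary to fetch
arch=$(uname -m)
if [ -r /etc/os-release ]; then
  # /etc/os-release is a shell-sourceable file defining ID, VERSION_ID etc.
  . /etc/os-release
fi
echo "${arch} ${ID:-unknown} ${VERSION_ID:-unknown}"
```

On a BAS dev space this should report something like "x86_64 debian 12".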
Job done!
I arrived in Wroclaw, in the west of Poland, on Sunday, and met up with my good friend, Developer Advocate colleague, local resident and fellow beer and food enthusiast Witalij Rudnicki, where we visited the 100 Bridges Brewery. What a great start!
On the following day (Monday) I arrived at the Capgemini offices in Wroclaw, greeted by Dominik in a room that was perfect for a day of learning and networking. It was a full house, not surprising given the CodeJam topic, which was the SAP Cloud Application Programming Model ("CAP" to us humans). It's such a fascinating and very capable framework, built with love, enthusiasm and skill, which is evident in both its philosophy and its codebase.
If you're interested in checking out the content of the CAP CodeJam that I ran, it's available publicly (like all our other CodeJams) on GitHub: Service Integration with SAP Cloud Application Programming Model.
There were plenty of refreshments, including that classic developer fuel, pizza, which we devoured half way through the exercises.
The next day was a travel day, where Witalij drove us the almost 400km to Warsaw, in time to do a couple of cool things that evening. After checking into our respective hotels, we headed to the Apple Museum. Such a wonderful exhibition of Apple memorabilia, from a private collector, it was fascinating to look at everything from a replica of the original Apple I, through the classic Apple II range, and all the way through to today. I spotted an Apple III which is the computer that features in the super Hands-on SAP Dev stickers designed and produced by the great Ronnie Sletta.
What made this an even better experience was that the museum was hosted inside a renovated factory from an older industrial era, and there were lots of factory artifacts preserved. Inside this factory was also a great beer place which was the location for an SAP Stammtisch that Witalij had also organised. Cheers!
And then the following day, we all met up at the KMD Poland offices for another CodeJam, on the same topic. Again, it was a full house, over a different but equally great room layout, and just like Wroclaw on Monday, the participants worked hard on the exercises, got to know each other, asked great questions and contributed valuable opinions too. The key ingredients to a successful CodeJam, I'd say.
One of the most enjoyable aspects of this trip for me was being looked after by Witalij. He showed me some great places, helped me find and experience some great food - traditional Polish food and indeed Georgian cuisine too. He even helped me find my way around both cities, and see some sights.
If you're interested in hosting a CodeJam, head on over to this blog post: So, You Want to Host a CodeJam! Everything you need to know, and I'll perhaps see you at the next one!
At the bow, there's the well deck.
At the stern there's a large open space, it being a cruiser style design.
Neither of these spaces has been covered, and it's been like this since I launched. I wanted to cruise with and live on the narrowboat for a while before deciding what covers, if any, would work for me.
The open spaces are wonderful, but sometimes it's good to have somewhere to store stuff outside of the cabin but in a place that won't get wet if it rains. And somewhere to hang wet coats and put dirty boots after a day cruising in all weathers. Even somewhere away from inside the cabin to put up the maiden to dry clothes after a cycle in the washing machine.
Having spent time wandering around moorings in marinas when I was visiting, I noticed that all of the covers I was impressed with came from the same company - All Seasons Boat Covers. They're based in Atherton in the greater Manchester area, and convenient for my current temporary winter location here in a marina on the western section of the Leeds & Liverpool canal.
I got in contact with them, and was visited by Gary, with whom I discussed my needs. I had originally decided that I wanted covers for both ends:
A full canopy over the stern is often referred to as a "pram hood" because of how the frame and canvas mechanism folds up and down. And the term for a cover over the well deck, which is also known as the "cratch", is a "cratch cover".
Traditionally a cratch cover consists of canvas plus a wooden component, made up of a frame, often triangular, mounted towards the front of the bow, and a straight wooden plank that spans the space over the well deck between that frame and the front of the cabin roof. The wooden frame thus forms a shape over which the canvas can be fitted. On The Fitout Pontoon's page on Covers & Canopies you can see an example of one of these:
While I'd originally decided I wanted to go for a cover at both ends, i.e. have a pram hood and a cratch cover, I changed my mind after a conversation with Gary, when I realised that as a continuous cruiser, erecting and taking down the pram hood at the stern each morning and evening would get rather tedious, even more so if the weather happened to be inclement.
Moreover, it would be a relatively complex affair given the size of my cruiser stern, and the areas that would need covering with canvas. Not to mention that complex also meant pricey. It was actually Gary that played a significant part in my decision against a pram hood, so hats off to him for helping me reach the right decision despite what would then turn out to be a lower spend by me.
While I had decided to forgo a pram hood, I did decide to get a cratch cover. I was originally planning on having both, so the lack of any cover over the stern made a cover over the bow even more important for me.
Rather than going for a traditional style cratch cover, I went for the more modern take, which uses steel poles to create a frame over which the canvas is fitted. This for me afforded a number of advantages:
All these advantages stem from the fact that such a modern tubular frame is super lightweight and far less bulky than the very present and substantial wooden frame of a traditional construction. This also leads to the name: "ghost cratch", which I guess is a nod to the fact that something (the bulk) is not present. I don't know how widespread this term is, but I like it and am going to use it from now on.
A week after I'd agreed the details with Gary, a two-man team of JP and Ray turned up to fit the steel frame first. Here's what it looked like:
They then measured the entire area for the canvas, and went away to source and cut out the material. Measure twice, cut once, and so on.
They returned yesterday with the canvas, and mounted it beautifully, using studs and tieback connectors for a snug fit.
Here are a few photos of the finished item.
The sides are removable, or I can just unzip the side "doors", roll them up and fasten them at the top. The front is not removable but I can roll it up completely in a similar way to the side doors.
All in all, I'm very pleased with it, and it makes a great addition to the usable dry space.
I had a task to complete this morning that involved managing some changes to a large repo on GitHub, and my Internet connection where I am right now was not conducive for cloning the repo, given its size. So I thought I'd try out doing it in a GitHub codespace, which would have much better connectivity to the rest of the Internet and to which I'd only need a thin connection for my actual remote terminal session.
While most folks will likely manage, access and use codespaces directly on the Web, or remotely via VS Code and the GitHub Codespaces extension, they're also available directly from the command line, i.e. you can attach to the container via SSH. And that's what I wanted to do.
I thought I'd document my exploratory journey here, mostly for my future self. The basis for the exploration was this resource: Using GitHub Codespaces with GitHub CLI, which describes using the GitHub CLI (gh) with the new command codespace, which has various subcommands:
; gh codespace
Connect to and manage codespaces
USAGE
gh codespace [flags]
AVAILABLE COMMANDS
code: Open a codespace in Visual Studio Code
cp: Copy files between local and remote file systems
create: Create a codespace
delete: Delete codespaces
edit: Edit a codespace
jupyter: Open a codespace in JupyterLab
list: List codespaces
logs: Access codespace logs
ports: List ports in a codespace
rebuild: Rebuild a codespace
ssh: SSH into a codespace
stop: Stop a running codespace
view: View details about a codespace
INHERITED FLAGS
--help Show help for command
LEARN MORE
Use `gh <command> <subcommand> --help` for more information about a command.
Read the manual at https://cli.github.com/manual
I'm a big fan of the GitHub CLI. It has some great interactive features, so you can invoke a command, supplying little to no parameter information, and it will prompt you interactively as required. But here I wanted to be able to invoke each command completely, with values for all appropriate parameters.
Codespaces seem to be primarily repo specific (although it looks like they can be org-wide too), so in order to be able to create a new codespace in this experiment I first created a new repo, adding a README as I think I saw somewhere that you can't create a codespace based on a completely empty repo (which sort of makes sense):
; gh repo create --add-readme --public codespacetest
✓ Created repository qmacro/codespacetest on GitHub
Now I can think about creating a codespace. In addition to the repo with which the codespace should be associated, I need to specify the machine type. I could of course fall back to the comfortable UI in the CLI:
; gh codespace create --repo qmacro/codespacetest
✓ Codespaces usage for this repository is paid for by qmacro
? Choose Machine Type: [Use arrows to move, type to filter]
> 2 cores, 8 GB RAM, 32 GB storage
4 cores, 16 GB RAM, 32 GB storage
8 cores, 32 GB RAM, 64 GB storage
16 cores, 64 GB RAM, 128 GB storage
but I wanted to be able to use the --machine parameter directly. For that, I needed to know what value to specify for that parameter.
With the Codespace Machines section of the GitHub API, I can find this out.
Making this call (note the API path includes the repo specification, underlining that relationship I mentioned earlier):
; gh api /repos/qmacro/codespacetest/codespaces/machines
returns this JSON dataset:
{
"machines": [
{
"name": "basicLinux32gb",
"display_name": "2 cores, 8 GB RAM, 32 GB storage",
"operating_system": "linux",
"storage_in_bytes": 34359738368,
"memory_in_bytes": 8589934592,
"cpus": 2,
"prebuild_availability": null
},
{
"name": "standardLinux32gb",
"display_name": "4 cores, 16 GB RAM, 32 GB storage",
"operating_system": "linux",
"storage_in_bytes": 34359738368,
"memory_in_bytes": 17179869184,
"cpus": 4,
"prebuild_availability": null
},
{
"name": "premiumLinux",
"display_name": "8 cores, 32 GB RAM, 64 GB storage",
"operating_system": "linux",
"storage_in_bytes": 68719476736,
"memory_in_bytes": 34359738368,
"cpus": 8,
"prebuild_availability": null
},
{
"name": "largePremiumLinux",
"display_name": "16 cores, 64 GB RAM, 128 GB storage",
"operating_system": "linux",
"storage_in_bytes": 137438953472,
"memory_in_bytes": 68719476736,
"cpus": 16,
"prebuild_availability": null
}
],
"total_count": 4
}
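To pick out just the machine names (the values needed for --machine), that response can be filtered down with jq. A sketch of mine, here fed with a one-machine sample of the JSON above rather than the live gh api call:

```shell
# Reduce the machines response to "name: display_name" pairs; the live
# invocation would pipe the output of
#   gh api /repos/qmacro/codespacetest/codespaces/machines
# into this same jq filter
sample='{"machines":[{"name":"basicLinux32gb","display_name":"2 cores, 8 GB RAM, 32 GB storage"}]}'
echo "$sample" | jq -r '.machines[] | "\(.name): \(.display_name)"'
# → basicLinux32gb: 2 cores, 8 GB RAM, 32 GB storage
```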
I decided on a minimal footprint codespace with basicLinux32gb, and also specified that it should be deleted soon (1h) after being shut down. It was created in a matter of seconds:
; gh codespace create \
--repo qmacro/codespacetest \
--machine basicLinux32gb \
--retention-period 1h
✓ Codespaces usage for this repository is paid for by qmacro
potential-space-pancake-g4x4j75vg2vr42
And there it is:
; gh codespace list
NAME DISPLAY NAME REPOSITORY BRANCH STATE CREATED AT
potential-sp... potential... qmacro/co... main Available about 4 m...
A codespace is essentially a dev container, and there's a default image from which such dev containers are instantiated when a codespace is summoned into being. It's possible to specify a different definition via a custom devcontainer.json definition, to which you can point via the --devcontainer-path option for the gh codespace create invocation, but I didn't do that here.
One of the reasons I didn't do that is I wanted to take the happy and simple path. Another reason though was what I read in the SSH into a codespace section of the aforementioned document (bold emphasis mine):
Note: The codespace you connect to must be running an SSH server. The default dev container image includes an SSH server, which is started automatically. If your codespaces are not created from the default image, you can install and start an SSH server by adding the following to the features object in your devcontainer.json file:
"features": {
// ...
"ghcr.io/devcontainers/features/sshd:1": {
"version": "latest"
},
// ...
}
OK, now to connect, using the ssh subcommand of gh's codespace command.
First, what's the full name of the codespace?
; gh codespace list --json name
[
{
"name": "potential-space-pancake-g4x4j75vg2vr42"
}
]
OK, let me save that for future reference in this shell session ...
; export CODESPACE=$(gh codespace list --json name --jq first.name)
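(As an aside, the --jq flag applies a jq expression to gh's JSON output, so the same first.name expression can be checked with standalone jq against a sample of the list output — a quick illustration of mine, assuming jq is installed:)

```shell
# first.name picks the "name" field of the first element, exactly as
# gh codespace list --json name --jq first.name does with gh's output
sample='[{"name":"potential-space-pancake-g4x4j75vg2vr42"}]'
echo "$sample" | jq -r 'first.name'
# → potential-space-pancake-g4x4j75vg2vr42
```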
... and now connect:
; gh codespace ssh --codespace $CODESPACE
Welcome to Ubuntu 20.04.6 LTS (GNU/Linux 6.2.0-1018-azure x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.
@qmacro ➜ /workspaces/codespacetest (main) $
Well that was easy!
I feel immediately at home, for a number of reasons. First, the basics:
@qmacro ➜ /workspaces/codespacetest (main) $ echo $SHELL; uname -a; cat /etc/lsb-release
/bin/bash
Linux codespaces-30d4d8 6.2.0-1018-azure #18~22.04.1-Ubuntu SMP Tue Nov 21 19:25:02 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=20.04
DISTRIB_CODENAME=focal
DISTRIB_DESCRIPTION="Ubuntu 20.04.6 LTS"
@qmacro ➜ /workspaces/codespacetest (main) $
Second, the repo is available in a familiar place, in a /workspaces/ directory, following the pattern we see in VS Code when opening a container, for example.
Third, there are familiar and useful tools that I use every day:
@qmacro ➜ /workspaces/codespacetest (main) $ type vi gh jq git curl
vi is /usr/bin/vi
gh is /usr/bin/gh
jq is /usr/bin/jq
git is /usr/local/bin/git
curl is /usr/bin/curl
So the container (err, codespace) has gh? I wonder if ...
@qmacro ➜ /workspaces/codespacetest (main) $ echo $GITHUB_TOKEN
ghu_gVPiEjCkhwDgr[...]
Yes!
Nice - looks like it's time to explore! I can issue a gh codespace stop (and a gh codespace delete, if I don't want to wait that 1h I specified earlier) when I'm done.
You may also be interested in Developing CAP in containers - three ways and also the general containers tag. And of course, you should visit containers.dev for much more on this area.
A note on capitalisation of "codespace". While the product name is "GitHub Codespaces" where the word is plural and capitalised, GitHub documentation refers to codespaces themselves with a lowercase "c", so I have tried to do that too here.
I was fortunate to be able to study until I was 21, before starting my real work life. At school, the 'A' level subjects I chose were Latin, Ancient Greek and Ancient History. I was very lucky to be able to continue on that curve at university, reading Classics ... predominantly Latin and Ancient Greek, with an emphasis on language rather than literature, plus a module in Sanskrit and one in Philology.
A strong interest in all things grammar, syntax and language, combined with this opportunity to study this in depth, has left me with a passion for accuracy and precision in language, as well as a love of etymology and semantics.
This passion has stayed with me throughout my life, into my career, which has been in software, specifically enterprise software in the SAP world. I don't think anyone would argue that with software - whether that be programming, defining specifications, testing, or any number of other related disciplines - accuracy and precision is the order of the day.
Sloppiness or inattention to detail, a lack of precision, call it what you want - is not conducive to success in the software world. Worse still, perhaps - it hinders communication and clarity in discussion.
Those that don't care too much about attention to detail may counter with the claim that this is pedantry - being excessively concerned with formalism, accuracy and precision. To that I say that if passion for attention to detail is pedantry, I'm a pedant and proud of it.
So it is driven by this passion that I cast my eye with an attention to detail on everything I create, and on everything I read. One particular instance, that provokes a perhaps unreasonable level of frustration in me, is to see a lack of attention to detail when it comes to using terms related to blogging.
When the Web was growing up, a large part of the social infrastructure that emerged was a vast and loose collection of Web sites where folks published their thoughts. Updates. Articles. Short and long form prose.
In doing this, they were logging their thoughts as time went by. For any given person doing this, you could go to read what they had to say, and it was usually presented in reverse chronological order. Moreover, you could subscribe to what they had to say via RSS (and latterly Atom) feeds, to have their updates come to you and presented in a so-called feed reader.
These Web sites where folks logged their thoughts became known as Web logs. Then Web log soon became Weblog in a sort of (but not quite) portmanteau. Shortly after, Weblog was often shortened to 'blog, with a leading apostrophe to indicate letters had been omitted.
And over time, just as it happened to telephone -> 'phone -> phone in the past, for example, even the leading apostrophe was dropped, resulting in just blog.
And articles presented and available on such a Web log, on a blog, were referred to as blog posts, or simply posts.
To take this blog as an example, the Web log (blog) itself, containing all posts, is at https://qmacro.org/blog/. Individual posts in the blog can be found at their specific URLs, such as:
There's a clue in the path section of each of these URLs.
In addition, the machine readable format of this blog, which can be used in feed readers to subscribe to the blog and automatically receive new posts, is in Atom format and is available at https://qmacro.org/feed.xml.
Perhaps a (massively simplified) diagram would help?
+-- blog (HTML)
|
V
+------------------------------------------------------+
| DJ Adams |
| |
| 15 Jan Developing CAP in containers - three ways ---- post
| 09 Jan Battlestation 2024 ---- post
| 09 Jan A simple jq repl with tmux, bash, ... ---- post
| |
| ... |
| |
+------------------------------------------------------+
feed (XML) --+
|
V
+-------------------------------------------------------+
| feed xmlns="http://www.w3.org/2005/Atom" |
| | |
| +- entry |
| | +- date 15 Jan |
| | +- title Developing CAP in containers - three ways |
| | |
| +- entry |
| | +- date 09 Jan |
| | +- title Battlestation 2024 |
| | |
| +- entry |
| | +- date 09 Jan |
| | +- title A simple jq repl with tmux, bash, ... |
| | |
| +- ... |
| |
+-------------------------------------------------------+
If the feed XML looks familiar, it should be. Atom, specifically the XML-based Atom syndication format, which is an open standard (RFC4287), along with the Atom publishing protocol (RFC5023), formed the basis of what became OData. Take a look at this Northwind OData V2 entityset and you may or may not be surprised to see that it is an Atom feed, just like the feed of this blog!
<feed xml:base="https://services.odata.org/V2/Northwind/Northwind.svc/" xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices" xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata" xmlns="http://www.w3.org/2005/Atom">
<title type="text">Products</title>
<id>https://services.odata.org/V2/Northwind/Northwind.svc/Products</id>
<updated>2024-01-22T10:20:20Z</updated>
<link rel="self" title="Products" href="Products" />
<entry>
<id>https://services.odata.org/V2/Northwind/Northwind.svc/Products(1)</id>
<title type="text"></title>
<updated>2024-01-22T10:20:20Z</updated>
<author>
<name />
</author>
<link rel="edit" title="Product" href="Products(1)" />
<link rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/Category" type="application/atom+xml;type=entry" title="Category" href="Products(1)/Category" />
<link rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/Order_Details" type="application/atom+xml;type=feed" title="Order_Details" href="Products(1)/Order_Details" />
<link rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/Supplier" type="application/atom+xml;type=entry" title="Supplier" href="Products(1)/Supplier" />
<category term="NorthwindModel.Product" scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" />
<content type="application/xml">
<m:properties>
<d:ProductID m:type="Edm.Int32">1</d:ProductID>
<d:ProductName m:type="Edm.String">Chai</d:ProductName>
<d:SupplierID m:type="Edm.Int32">1</d:SupplierID>
<d:CategoryID m:type="Edm.Int32">1</d:CategoryID>
<d:QuantityPerUnit m:type="Edm.String">10 boxes x 20 bags</d:QuantityPerUnit>
<d:UnitPrice m:type="Edm.Decimal">18.0000</d:UnitPrice>
<d:UnitsInStock m:type="Edm.Int16">39</d:UnitsInStock>
<d:UnitsOnOrder m:type="Edm.Int16">0</d:UnitsOnOrder>
<d:ReorderLevel m:type="Edm.Int16">10</d:ReorderLevel>
<d:Discontinued m:type="Edm.Boolean">false</d:Discontinued>
</m:properties>
</content>
</entry>
If you're interested in learning more about OData and its origins, see Monday morning thoughts - OData.
So with that, I hope it's plain to see that blog means the entire collection of posts. It does not mean, and never has meant, an individual post. The difference is not difficult, nor is it arcane or something that requires advanced study to understand.
So I exhort you to please use the right terminology. Using blog to refer to an individual post is like using the name of an entire magazine, and all its issues, to refer to an individual article on a specific topic, with a specific title, published on a specific date.
Some folks may raise the point about language evolving. Of course language evolves. That is not the issue here. The issue is that there is a perfectly good word for an article (post, or blog post) and that means we still have a word with which we can refer to the entire Web log (collection of posts), and that is blog.
Using the word blog incorrectly, i.e. to refer to an individual article (a post) confuses things, pushes us into a situation where we no longer have a word to refer to the whole, and, well yes, makes you look as though your attention to detail is somewhat lacking.
https://www.youtube.com/watch?v=gu5r1EWSDSU
In it we went through what's required for local development, following the guide in the Jumpstart Development section of Capire ("Capire" is the friendly name given to the CAP documentation).
Over the years I've gone through many laptop and desktop machines, with different operating systems, and installed plenty of tools. Up until recently I've also fairly regularly wiped the operating system completely on those machines and reinstalled everything, or at least reinstalled the tools that I still needed. This was because over time my machines got full of cruft and slowed me down, because I didn't always have the right version of, say, Python, or I needed to run multiple different versions of Node.js*, or something that I'd previously installed was preventing the install of something new that I needed.
* Yes I know about tools like nvm, and similar ones for managing multiple versions of other languages and tools, and I've used many of them. But they have always felt like a sticking plaster, rather than a solution. While I'm on this aside, I came across asdf recently which goes one step further in an attempt to be "one version manager to rule them all". But I'm still only half convinced.
Bottom line is that as I move from project to project, from one tool requirement to another, from one language or language version to another, there's inevitably a wake of install froth, a trail of untidy and unwanted software on my machine. There has to be a better way.
That better way is dev containers.
Working inside a dev container gives me everything I need in a concise package, that I can turn on and off, start up and shut down, mess up and recreate, and share with others so that they have exactly the same environment (and versions of tools and runtimes) as me. What's more, while in most folks' ideal scenario, those dev containers run on their local machine, they're independent, and also I often run dev containers on remote machines and connect to them from my local machine.
Put simply, developing in a dev container is a great way to:
The list of install requirements in the Jumpstart Development section looked like an ideal situation for a dev container.
Especially given the context of folks joining the live stream episodes and wanting to play along, often on laptops to which they don't have admin access to install stuff, or on machines they simply don't want to install anything else on.
I see this often in the SAP CodeJams that we run; attendees' corporate laptops are often locked down, and it can be that a significant amount of time is spent at the start of the day just trying to get stuff installed, fighting with policies, or with the operating system, access rights, or even simply badly behaved install mechanisms. All that before getting to the real content of the day is distracting, tedious and not what anyone wants especially at the start of a new learning journey.
In this post I describe three ways to develop with CAP Node.js in a container context.
Two of those ways involve a dev container definition. So let's look at that first.
Given the install requirements, I created a small repo consisting essentially of a .devcontainer/
directory, following the containers.dev approach which, while it started as a mechanism specific to VS Code, is now an open standard*.
* Another example of something great, and open, that many of us use in our own editor environments, but started life originally in VS Code, is the Language Server Protocol, which I use in my Neovim-based editor setup. More on that another time, perhaps.
The entire content of the repo looks like this:
../capb2b
|-- .devcontainer
| |-- Dockerfile
| `-- devcontainer.json
`-- README.md
The contents of devcontainer.json
are as follows:
{
"name": "Back to basics - SAP CAP - dev container",
"build": {
"dockerfile": "Dockerfile",
"args": { "VARIANT": "20" }
},
"customizations": {
"vscode": {
"extensions": [
"sapse.vscode-cds",
"dbaeumer.vscode-eslint",
"humao.rest-client",
"qwtel.sqlite-viewer",
"mechatroner.rainbow-csv"
]
}
},
"forwardPorts": [ 4004 ],
"remoteUser": "node"
}
This describes what VS Code should do with regards to opening the contents of the directory, which is, briefly:
- build the container image according to the instructions in Dockerfile, setting the build argument VARIANT to 20
- install the listed extensions in the container
- forward port 4004 (so that services in the container can be reached at http://localhost:4004)
- connect to the container as the user node**
** This is rather than the user root
, which is less desirable, for security reasons. Note that the base image for the container needs to have this node
user already created (and in the case of the base image here, it is, as can be seen from its definition).
The dev container image itself is described in Dockerfile
thus:
# syntax=docker/dockerfile:1
ARG VARIANT="20"
FROM mcr.microsoft.com/vscode/devcontainers/javascript-node:0-${VARIANT}
# Install some generally useful tools
RUN apt-get update && apt-get -y install --no-install-recommends curl git sqlite3
# Install SAP CAP SDK globally
USER node
RUN npm install -g @sap/cds-dk
WORKDIR /home/node
The base image from which this one is made is a Node.js Development Container Image, specifically (via the default value for the VARIANT
arg):
mcr.microsoft.com/devcontainers/javascript-node:20
The 20 refers to the Node.js major version. This VARIANT
is the same build argument referenced in the devcontainer.json
file.
So going back to the install list in the Jumpstart Development section, that's Node.js taken care of.
On top of that base some core tools are installed:
- git, for source code control (also listed)
- sqlite3, because at some stage we'll probably want to look inside the files that are used as the default SQLite-based persistence store in development
- curl, of course ("because curl")

Finally, the CAP development kit, in the form of the NPM package @sap/cds-dk, is installed (globally, with the -g option, but remember that's just globally within a given container image). This gives us access to the cds command, which is a multi-faceted tool essential for CAP development.
The only other item on the list is "Java and Maven" but of course that's not relevant here for Node.js flavoured CAP.
So that's everything needed.
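As a quick sanity check inside the running container, a small ad hoc script like this (my own, not part of the repo) confirms the tools from the Dockerfile are on the PATH:

```shell
#!/usr/bin/env bash
# Smoke test for the container's terminal: check each tool mentioned
# in this post is available, and report where it lives.
report=$(for tool in node git curl sqlite3 cds; do
  if command -v "$tool" > /dev/null 2>&1; then
    echo "$tool: $(command -v "$tool")"
  else
    echo "$tool: NOT FOUND"
  fi
done)
printf '%s\n' "$report"
```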
As I mentioned before, two approaches involve this dev container definition. One is local, the other one is remote. I'll describe the local approach first.
VS Code is available for pretty much all platforms. As is Docker Desktop. And, in the continued context of avoiding installation of tools locally, I would say that these two are exceptions. I mean, my main editor is of course (Neo)vim, inside my terminal-based IDE but if I were to use a more graphical IDE I'd want it running locally and directly on the host OS (as I like my terminal emulator to run locally). And I see Docker Desktop in a similar way to how I see a VM manager like VirtualBox or VMWare Fusion, for example. An extension of the host OS.
Anyway, I digress, and not for the first time. Talking of digressions, I'm not going to go into Docker Desktop, licensing, and alternatives (such as Podman Desktop) here. That's a topic for another time.
So, here's how it works. With the contents of the repo locally, I start VS Code locally on my macOS host, and open that directory. And this is what happens:
You can see that:
- VS Code notices the .devcontainer/ directory within the directory just opened, and consequently offers the "Reopen in Container" option
- on choosing that option, a container based on the definition in the .devcontainer/ directory is created and VS Code connects to it*

* Yes, this description is from the name property in the devcontainer.json file.
After opening a terminal in VS Code, we can see the shell prompt which indicates that:
- we're connected as the node user
- the git, curl and cds commands are available

By inspecting various Docker resources, we can see what's happened behind the scenes.
First, there's a new image vsc-capb2b-main-72ec...
:
; docker image ls
REPOSITORY TAG IMAGE ID
newdev latest 117d08c98a2b
vsc-capb2b-main-72ec00... latest dd71ceb116fb
codejam latest 7fb76bf1a160
This image is indeed the one built according to the instructions in .devcontainer/
as we can see from the image's metadata:
; docker image inspect dd71 \
| jq 'first.Config.Labels["devcontainer.metadata"] | fromjson'
[
{
"...": "...",
},
{
"customizations": {
"vscode": {
"extensions": [
"sapse.vscode-cds",
"dbaeumer.vscode-eslint",
"humao.rest-client",
"qwtel.sqlite-viewer",
"mechatroner.rainbow-csv"
]
}
},
"remoteUser": "node",
"forwardPorts": [
4004
]
}
]
From this image a container has been created, to which VS Code has connected:
; docker container ls
CONTAINER ID IMAGE COMMAND CREATED
16c6e1a6b28f vsc-capb2b-main-72ec00... "/bin/sh -c 'echo Co…" 5 minutes ago
0b286ef3c7cf newdev "tmux -u" 2 days ago
f29a13b910be alpine/socat "socat tcp-listen:23…" 2 days ago
There's plenty of interesting detail to see when we inspect this container, but for now let's just limit it to having a look for mounts:
; docker container inspect 16c6 | jq 'first.HostConfig.Mounts'
[
{
"Type": "bind",
"Source": "/Users/I347491/work/scratch/capb2b-main",
"Target": "/workspaces/capb2b-main",
"Consistency": "cached"
},
{
"Type": "volume",
"Source": "vscode",
"Target": "/vscode"
}
]
We can see we have a bind mount of the directory we opened in VS Code, i.e. the capb2b-main/
directory, which makes sense, as we want to access resources in there from within the container.
+---------------------------------- host (macOS) -------+
| /Users/ |
| | |
| +- I347491/ |
| | |
| +- work/ |
| | |
| +- scratch/ |
| | |
| +-------- +- capb2b-main/ |
| | | |
| | +- .devcontainer/ |
| | +- README.md |
| | |
| bind mount |
| | |
| | |
| | +------------- container (Linux) --+ |
| | | /workspaces/ | |
| | | | | |
| +------------> +- capb2b-main/ | |
| | | |
| +----------------------------------+ |
| |
+-------------------------------------------------------+
Moreover, it wouldn't make sense to store our CAP app resources inside the container as they'd be lost if the container was removed. Notice that the value of the Target
property in this bind mount, /workspaces/capb2b-main
, matches the directory we're in, shown in the shell prompt in the terminal in VS Code:
node ➜ /workspaces/capb2b-main $
There's another mount, this time a volume mount. This has caused a volume to be created, which we can see thus:
; docker volume ls
DRIVER VOLUME NAME
local vscode
But what's in it? Well, we can see where this volume is mounted (in the Target
property), so, in the terminal still running in VS Code, let's have a brief look:
node ➜ /workspaces/capb2b-main $ tree -L 3 /vscode
/vscode
└── vscode-server
    ├── bin
    │   └── linux-arm64
    └── extensionsCache
        ├── dbaeumer.vscode-eslint-2.4.2
        ├── humao.rest-client-0.25.1
        ├── mechatroner.rainbow-csv-3.11.0
        ├── qwtel.sqlite-viewer-0.3.13
        └── sapse.vscode-cds-7.5.0

4 directories, 5 files
That makes sense - it looks like VS Code uses this volume to store VS Code specific resources, such as the VS Code server components, and a cache for the extensions. Nice!
So at this point we can happily develop CAP apps and services, on our local machine, without having to have installed any CAP specific or peripheral tools locally on the host. We're inside a container, but the files we create in building our app are safe on the host-local file system.
We can see this as follows. If we initiate a new CAP project in the container, in the terminal inside VS Code, then we end up with a new directory containing the core files and directories for a Node.js CAP project (app
, srv
and db
directories, plus a package.json
file):
node ➜ /workspaces/capb2b-main $ cds init bookshop
Creating new CAP project in ./bookshop
Adding feature 'nodejs'...
Successfully created project. Continue with 'cd bookshop'.
Find samples on https://github.com/SAP-samples/cloud-cap-samples
Learn about next steps at https://cap.cloud.sap
node ➜ /workspaces/capb2b-main $ cd bookshop/
node ➜ /workspaces/capb2b-main/bookshop $ ls -l
total 8
drwxr-xr-x 2 node node 64 Jan 15 09:46 app
drwxr-xr-x 2 node node 64 Jan 15 09:46 db
-rw-r--r-- 1 node node 348 Jan 15 09:46 package.json
-rw-r--r-- 1 node node 675 Jan 15 09:46 README.md
drwxr-xr-x 2 node node 64 Jan 15 09:46 srv
And outside of the container, in the host macOS environment, we can also see these files locally:
~ % cd ~/work/scratch/capb2b-main
capb2b-main % ls -l bookshop
total 16
-rw-r--r--@ 1 I347491 staff 675 15 Jan 09:46 README.md
drwxr-xr-x 2 I347491 staff 64 15 Jan 09:46 app
drwxr-xr-x 2 I347491 staff 64 15 Jan 09:46 db
-rw-r--r-- 1 I347491 staff 348 15 Jan 09:46 package.json
drwxr-xr-x 2 I347491 staff 64 15 Jan 09:46 srv
Great!
So now we've seen the local approach with this dev container definition, it's time to turn to the remote approach. This approach doesn't need VS Code, nor does it need Docker Desktop. All you need is a browser (though you may be pleasantly surprised to learn that you can actually combine these two approaches - read on until the end to find out how).
The magic of this approach lies in GitHub Codespaces.
In the good old days, given a repository on GitHub, all the options to work with it involved downloading the repository content somehow (via git
or a simple ZIP file download).
These days, all those options are collected within a tab labelled "Local", and there's now another tab labelled "Codespaces" which offers the ability to create a working environment specifically for, and containing the content of, the repo.
Here is what happens:
Beyond not needing anything more than a browser, the crazy thing is that this is pretty much exactly the same as what we see with the VS Code and Docker Desktop approach, except that everything is remote: the container is built and started according to the same .devcontainer/ definition, just on a remote machine rather than on our own.
The upshot is that we have pretty much the same environment as we have in our local approach!
Talking of simple contexts where all we need is a browser brings me onto the third approach. This is also very similar to the GitHub Codespaces approach, in that it's all remote and all you need is a browser.
The SAP Business Application Studio (BAS) is an IDE in the cloud. With BAS, you can manage your projects in one or more so-called Dev Spaces, similar to Codespaces, and there are different Dev Space setup flavours available depending on your development project requirements.
Here's an example of the creation of a Dev Space (choosing the "Full Stack Cloud Application" flavour in the setup means that all the tools we need for CAP development will be available) and the cloning of the repo ready to start CAP development:
Interestingly, since autumn last year, the underlying tech for Dev Spaces in BAS has been Code - OSS, the open source flavour of VS Code. This is why the three approaches (VS Code locally, GitHub Codespaces remotely, and now here with a BAS Dev Space) all look the same: effectively they are the same, underneath, from an IDE perspective.
So here again is another way to develop CAP apps in what is effectively a container. The actual underlying mechanism may be slightly different, but the effect is the same, in that you don't have to install anything locally on your host machine.
In the section on GitHub Codespaces earlier, I mentioned in passing that it is actually possible to combine the local and remote approaches. Before I finish this post, I'll show that in action here.
Given a GitHub Codespace, it's not only possible to open that in the browser, as we saw earlier, but it's also possible to connect to it ... from VS Code running locally on your host. This is both obvious when you think about the underlying technology in use here, but at the same time it's sort of mind blowing that this is a thing. At least to me. Anyway, here it is in action:
What's happening here is that there's a GitHub Codespaces extension that I also now have installed in my local VS Code. This works in a similar way to the Dev Containers extension. In order for this to work the first time I tried it, I had to go through an authentication step which securely connected my VS Code to GitHub, via the GitHub for VS Code connected application (you don't see this step in the video above).
Visually it's almost the same too, except that after the remote indicator (in the bottom left of VS Code) shows "Opening remote", the final status is slightly different:
* The generated name of the Codespace is "redesigned potato".
Of course, you can also start the connection from within VS Code, i.e. reach out to the remote container (Codespace) using the extension. Here's a quick demo of that in action:
All in all, the possibilities of container based development are, in my humble opinion, pretty excellent. I am also keeping a close eye on the containers.dev initiative plus various related projects, such as DevPod, which I learned about in the Open Source Dev Containers with DevPod live stream episode on Bret Fisher's YouTube channel.
If you want to get started developing CAP apps and services, I would strongly recommend you look into using a container based setup, and take any one of the approaches outlined here.
Happy developing!
It's in the office space on the narrowboat that you can see here near the centre of the cabin (from the post Working from a narrowboat - Internet connectivity):
My main work machine is a MacBook Air M2 from 2022, which is great. I'm running the Chromebook tablet (Lenovo IdeaPad Duet 3) to monitor Docker containers (via a remote ssh-based context, in a similar way to how I set up remote access to Docker on my Synology NAS) and also I use it for controlling my live streams via YouTube Studio, where I can monitor the chat separate from my main display.
The space is perfect, and just behind the wooden bulkhead is the stove, so it's nice and warm here in the office as well as in the saloon and galley.
When it comes to exploring and processing JSON data, jq is my go-to language. For exploring, I either just want to browse the entire JSON dataset, or filter it with simple jq expressions.
For browsing, I have a simple script jl (short for "jq | less") which just uses the appropriate options to send a colourised pretty printed version of the JSON through less
:
#!/usr/bin/env bash
# JSON less
# This is a script so that I can use it in lf
jq -C . < "$1" | less -R
For exploring, I often reach for interactive jq, which I love. There are similar Web-based tools, and both ijq
and these Web-based tools follow the same three-window pattern: one window for the input JSON, one for the filter expression, and one for the output.
Most of these tools only give you a single filter line to enter your expression. For 80% of the time, that's perfectly fine, and I find myself enjoying the ease of exploration that ijq
offers (see these posts on ijq).
Sometimes though I find myself wanting a bit more space in the "filter" window, so I can use it more as a REPL, or at least as a multi-line filter editor, which, as a bonus, then gives me great syntax highlighting thanks to this TreeSitter plugin for jq and linting thanks to this jq LSP server.
My "IDE" is my terminal, and I run bash
shells inside tmux
(inside a dev container) and make use of the command line and all that the UNIX pipeline and large set of small tools has to offer. To that end, I have cobbled together a quick "jq tmux based explorer" jqte which I can use to explore a JSON dataset.
Here's a short demo of it in action. The context in which I invoke jqte products.json
is my IDE, i.e. I'm within my main tmux
session:
It opens up a new tmux
window with three panes (1, 2 and 3), as also described in the script itself:
+---------------------+----------------------+
| output | original JSON data |
| | |
| | |
| | |
| 1 | 2 |
| | |
| | |
| +----------------------|
| | filter |
| | |
| | 3 |
| | |
+---------------------+----------------------+
I'm placed directly into the "filter" pane which is a neovim
session editing a temporary file that represents my jq filter, where the identity function .
is specified as a starting point.
Every time I modify the filter, it is applied in its entirety to the source JSON and the result is displayed in the "output" pane. This is achieved by the combination of a vim autocmd
and the super useful entr.
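That wiring can be sketched roughly as follows; the temp file handling and the exact entr flags here are my assumption, not necessarily what jqte itself does:

```shell
#!/usr/bin/env bash
# Sketch of the reactive loop: a temp file holds the jq filter, and
# entr re-runs jq against the data whenever that file is written.
filter=$(mktemp)
echo '.' > "$filter"   # start with the identity filter

# In the neovim pane, an autocmd saves the buffer on every change:
#   autocmd TextChanged,TextChangedI <buffer> silent update

# In the output pane, entr watches the filter file (shown commented
# out here, as it blocks; it runs in its own pane in practice):
#   echo "$filter" | entr -c jq -f "$filter" products.json

cat "$filter"
```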
When I'm done, I can close the entire jqte
session using tmux
's normal "kill-window" facility which I invoke with <prefix>&
.
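For reference, the pane-creation side of such a script can be sketched as a dry run; the pane sizes and exact tmux calls here are my assumption, echoed rather than executed so the sequence is visible:

```shell
#!/usr/bin/env bash
# Dry-run sketch of tmux calls that could produce the three-pane
# layout shown earlier (assumed sizes; not the actual jqte script).
run() { echo "tmux $*"; }
plan=$(
  run new-window -n jqte        # a fresh window for the session
  run split-window -h -p 50     # pane 2: original JSON data
  run split-window -v -p 30     # pane 3: the filter (nvim on a temp file)
  run select-pane -t 3          # land in the filter pane
)
printf '%s\n' "$plan"
```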
I use dev containers everywhere. I hardly ever work outside of a container on, say, my work laptop (a 2022 Apple MacBook Air M2). Instead, I do everything, or as much as I can, in dev containers.
I'm a command line kind of person, happiest and most productive in a terminal session running the Bash shell, but despite that, I haven't installed any CLI tools locally, not even Homebrew. Heck, I haven't even taken the time to change the default shell that macOS ships with these days (zsh). For me, the host OS is secondary.
I've found that the only commands I issue in the shell on my laptop's OS have been docker
commands, to build, start, stop and generally manage my images and containers. I've done this outside my normal dev container based working environments partly because sometimes I have to recreate those working environments, and partly because access to the Docker engine itself (via Docker Desktop) is, at least on a macOS device, a little difficult.
That's because of how Docker is provided on macOS (and Windows) machines generally, which is in a Linux virtual machine. While it's possible to, say, create a bind mount, connecting the Docker engine socket exposed from that VM on the macOS host OS at /var/run/docker.sock
to a container, it's fraught with difficulties and I've really only had success when the user inside the container is root
. It's all to do with permissions combined with the extra layer of indirection brought about by the fact that the Docker engine is actually running in a Linux VM rather than natively on the macOS host.
From a productivity perspective the "last mile" for me is to be able to do more with Docker from within my dev containers. In other words, run the docker
client command, which of course needs to be connected to the Docker engine, and I think I've found a solution that feels clean to me, and doesn't involve bind mounting /var/run/docker.sock
nor trying to hammer permissions into shape with a big mallet.
While it may not be the most compact approach, it works for me, and is also teaching me more about topics I want to dig deeper into anyway, including Docker networking and container-to-container communication.
The solution is based on some great tips from Ákos Takács in one of a myriad of discussions on the interwebs about the very problem I've described: Permission for -v /var/run/docker.sock.
It involves running socat. The man page describes it as "a command line based utility that establishes two bidirectional byte streams and transfers data between them. Because the streams can be constructed from a large set of different types of data sinks and sources (see address types), and because lots of address options may be applied to the streams, socat can be used for many different purposes". In other words, a "multipurpose relay" or "so[cket] cat".
In this case, socat
is used to relay /var/run/docker.sock
, which is the UNIX socket on which the Docker engine is listening, making it available via TCP.
That TCP availability, in the form of a <hostname>:<port>
socket, is to be provided by a container running that socat
process, in the context of a Docker bridge network, to which the dev container is also connected.
And through the magic of Docker networking, that TCP socket is then made available to the dev container, and being a TCP socket rather than a file-based UNIX one, we've moved away from any permissions issues. Moreover, that exposure is solely within that specific bridge network.
Here's what this solution looks like.
+-------------------------------------------------host--+
| |
| /var/run/docker.sock |
| ^ |
| | |
| bind mount |
| | |
| +--------|--------------------------devnet--+ |
| | | | |
| | v | |
| | +-----socat-+ +-------dev-+ | |
| | | 2375 |-------| | | |
| | | | | | | |
| | | socat | | tmux/bash | | |
| | | | | | | |
| | | user:root | | user:user | | |
| | +-----------+ +-----------+ | |
| | | |
| +-------------------------------------------+ |
| |
+-------------------------------------------------------+
Setting this up manually and step by step can help illustrate each component part.
Containers running in the same network can communicate with one another. Containers with names that have been specified explicitly can be reached via that name within the network that they share. This is a great combination of features that I can make use of.
I create a new network called "devnet" like this, being explicit about the type of network (which is default, but it doesn't harm to be explicit):
$ docker network create --driver bridge devnet
969a73295ceeedaf63848c5f7aa4993895a0893e3607e067af347eb3b2d83dc2
Now I can see it in the list of all networks:
$ docker network ls
NETWORK ID NAME DRIVER SCOPE
d8d6d3ae3737 bridge bridge local
969a73295cee devnet bridge local
ee4841fdb953 host host local
b6ce4554097e none null local
So far so good.
Now for the container that will provide the socat
-powered relay between the UNIX socket at /var/run/docker.sock
on the host, and a TCP socket that will be available in the "devnet" network. There's an Alpine based container that is perfect for this.
The key aspects of this container are that it should:
- be attached to the "devnet" network
- have the host's /var/run/docker.sock bind mounted into it
- run socat, listening on a TCP port (2375) and relaying traffic to and from that UNIX socket
I'll give this container the name "socat", and conjure it up like this:
$ docker run \
--name socat \
--network devnet \
--volume /var/run/docker.sock:/var/run/docker.sock \
--detach \
alpine/socat tcp-listen:2375,fork,reuseaddr unix-connect:/var/run/docker.sock
5c3f153671e7b292e156bd5a5de9c5539aad544c2eea6466303a1623584d4a66
Great. At this point, I have the socket relay mechanism running, and within the "devnet" bridge network, that relay's TCP endpoint is available at socat:2375
. Note that it's not generally available at the host level, which is a good thing of course; running netstat -atn | grep 2375
gives no output.
Now all I need to do is to start up an instance of my dev container image (which I've recently been revamping, and the image name is currently "newdev"). There are a few things to note about my dev container:
- it runs as a non-root user ("user", as shown in the diagram above)
- its command is tmux, my terminal-based working environment
- it has the docker client CLI installed
Now that I have the docker
client CLI in my dev container, I want to use it to connect to the Docker engine.
But rather than try to connect to it via a bind mount to the /var/run/docker.sock
directly, with all the difficulties that entails, I can now connect to it via a TCP socket, specifically the one that my "socat" container is now making available to other containers in the "devnet" bridge network.
The simplest way to make this the default in the dev container is to set the DOCKER_HOST
environment variable to that socket address (socat:2375
) when I instantiate the container.
Here goes:
$ docker run \
--name dev \
--network devnet \
--volume "$HOME/work:/home/user/work" \
--platform linux/amd64 \
--publish "4004:4004" \
--publish "8000:8000" \
--env DOCKER_HOST=socat:2375 \
--tty \
--interactive \
--detach \
newdev
3c39a1084871a5f26988d5ef0008dc3134c6313759600b45ef09f683bd251f57
A couple of notes on two of the options used:
- I don't normally use the --detach option when starting my dev container, as I want to jump right into it, but I thought I'd use it here for consistency and also to help with the step by step approach
- the --platform linux/amd64 option is there because that's the architecture my "newdev" image is built for
Now that I have everything set up, I can jump into my dev container and use the docker
CLI as I would normally have had to do in the raw macOS host level context.
First, if I attach to the container, like this: docker attach dev
I get my working environment, my PDE (Personalised Development Environment) of choice. My dev container, my home on this (and all other) hosts. It's basically tmux
, and I use bash
as my shell. Obviously.
The dev container has all my tools installed, one of which is (now that I can more reliably and properly connect) docker
. First, I can see that the environment variable DOCKER_HOST
is set for me (thanks to the use of that --env
option in the container setup):
; env | grep DOCKER
DOCKER_BUILDKIT=1
DOCKER_HOST=socat:2375
DOCKER_CONFIG=/home/user/.config/docker
The DOCKER_HOST
environment variable is related to the Docker context, which I could set with docker context use but I don't have to now that there's a value set for the variable.
So all my docker
invocations will use the context socat:2375
for where to connect.
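Incidentally, the environment variable is just a convenient default; for a one-off invocation, the docker CLI's -H (or --host) option points a single command at the same endpoint (values here taken from this post's setup):

```shell
#!/usr/bin/env bash
# The environment variable makes the relay the default endpoint for
# every docker invocation in this shell:
export DOCKER_HOST=socat:2375
echo "default endpoint: $DOCKER_HOST"

# For a one-off command, the -H option does the same job without
# touching the environment:
#   docker -H socat:2375 container ls
```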
And it works!
; docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3c39a1084871 newdev "tmux -u" 11 minutes ago Up 11 minutes 0.0.0.0:4004->4004/tcp, 0.0.0.0:8000->8000/tcp dev
5c3f153671e7 alpine/socat "socat tcp-listen:23…" 50 minutes ago Up 50 minutes socat
And I can see that those two containers are indeed in the "devnet" network:
; docker network inspect devnet | jq 'first | .Containers'
{
"34e1703d91894701f9de1211256e7157ef755f35d3884057163827d515b837be": {
"Name": "dev",
"EndpointID": "a12669474c0bd39767a3dabadea0da8f0247f597f8d4dfa5458a344f933dcc47",
"MacAddress": "02:42:ac:1b:00:03",
"IPv4Address": "172.27.0.3/16",
"IPv6Address": ""
},
"5c3f153671e7b292e156bd5a5de9c5539aad544c2eea6466303a1623584d4a66": {
"Name": "socat",
"EndpointID": "a83290a0970b04ea2a3257c2c4187413cc9b035589b2a0cc9927137be704325d",
"MacAddress": "02:42:ac:1b:00:02",
"IPv4Address": "172.27.0.2/16",
"IPv6Address": ""
}
}
I did for a second want to check that my access wasn't read-only or something, but no - I can pull images and create new containers, of course, too:
; docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
478afc919002: Pull complete
Digest: sha256:ac69084025c660510933cca701f615283cdbb3aa0963188770b54c31c8962493
Status: Downloaded newer image for hello-world:latest
Hello from Docker!
This message shows that your installation appears to be working ...
Not only that, but this hello-world
experiment also lets me confirm one more thing - that containers created don't automatically somehow run in the "devnet" network, and indeed they don't (it wouldn't make sense if they did, but it's always nice to check):
; docker container ls \
--all \
--format json \
| jq -r '"\(.Names) (\(.Image))"'
dev (newdev)
socat (alpine/socat)
priceless_cori (hello-world)
; docker inspect dev socat priceless_cori \
| jq 'map({(.Config.Image):.NetworkSettings.Networks|keys})|add'
{
"alpine/socat": [
"devnet"
],
"newdev": [
"devnet"
],
"hello-world": [
"bridge"
]
}
Setting the network and containers up like this manually is great for a blog post like this, but I'd prefer something a little more automated and declarative. So here's a Docker compose file that sets up the same combination:
services:
socat:
image: alpine/socat
container_name: socat
networks:
- devnet
command: 'tcp-listen:2375,fork,reuseaddr unix-connect:/var/run/docker.sock'
user: root
volumes:
- type: bind
source: /var/run/docker.sock
target: /var/run/docker.sock
dev:
depends_on:
- socat
image: newdev
container_name: dev
networks:
- devnet
platform: linux/amd64
stdin_open: true
tty: true
volumes:
- '$HOME/work:/home/user/work'
ports:
- '4004:4004'
- '8000:8000'
environment:
DOCKER_HOST: 'socat:2375'
networks:
devnet:
name: devnet
driver: bridge
Now with a simple docker compose up --detach
invocation, I can have everything up and running as before:
; docker compose up --detach
[+] Running 3/3
✔ Network devnet Created 0.0s
✔ Container socat Started 0.0s
✔ Container dev Started 0.0s
And now I can attach to my dev container as before with docker attach dev
.
Nice!
There are a few reasons why, and some folks may be wondering. So here are those reasons, briefly.
I have been around a long time, taking my first nervous but excited steps onto the Internet back in the days of dial-up, via Compulink Internet Exchange (aka CIX, a very early Internet Service Provider in the UK). I enjoyed access to email, newsgroups (on Usenet) and various resources made available via Gopher and Wais. Those were the days (late 1980s / early 1990s) before the Web, and so I've also seen the Web from its infancy, in all its various guises as it's transitioned to where we are now, which consists of -- for many folks -- a small number of central services owned by very large organisations.
Back in the day, we ran our own websites, maintained and posted on our own weblogs, and grew ad hoc federation via simple webrings, linkback mechanisms, RSS (and latterly Atom) feeds and more. They were happier, simpler times, bursting with potential, times when we had more ownership, control & responsibility for our own content and how it connected to other content.
To that point, while I've been blogging for decades (on my own blog), I was also "microblogging" on another platform before I hopped onto Twitter in early 2007. That platform was Identica. And it was open and federated.
The attraction of the federated nature of Identica is here again, and much stronger, with the Fediverse, "an ensemble of social networks which can communicate with each other, while remaining independent platforms". There's a standard, an open decentralised social networking protocol, called ActivityPub, which powers this open interconnectivity, and Mastodon is a microblogging platform that supports ActivityPub and plays nicely in this (relatively) new exciting world.
Incidentally, a co-author of ActivityPub, Evan Prodromou, was a key actor in the creation of Identica and the technology behind it.
For me, Twitter's API story has been complex and beset with change. At heart I'm a tinkerer, a builder, a hacker (in the proper sense of the word), a developer. So an API to a platform I use is an important aspect that makes that platform more attractive to me. Perhaps worse than a platform that has no API to begin with is a platform that had a great API ... which then eventually is made unavailable for most folks.
That's what's happened with Twitter. That not only kills off any cool integrations and hacks, but it also suppresses any thoughts or interest in building more stuff too. As a developer, I feel that my content and interaction on the platform is no longer wanted. I'm fine with that, it's not my platform. But it also means I don't have to stay.
One example of a very simple integration that I use already on Mastodon is a mechanism that toots notes on, and the URLs for, articles I read and find interesting. Here are a couple of examples, on a post about an e-reader setup and on tools and how we maintain them.
I used to have that running on Twitter, but because of the suppression of innovation and the removal of access to the API, that doesn't work any more on that platform. If you're interested in seeing this simple mechanism, see the GitHub Actions workflows in my URL notes repository.
Not only does Mastodon have an open API, but the potential with ActivityPub is enormous, too.
I was a big user of Tweetdeck, which gave me a great way to organise my consumption of, and interaction with, content on Twitter. With the recent changes, that access to Tweetdeck has gone. Moreover, my timeline is blurred with adverts, and content that is "suggested" to me, in an order that is sometimes confusing too.
With Mastodon, things are simpler, more straightforward, and not polluted with stuff for which I didn't ask.
A side effect of what's happened to timeline content on Twitter, and how it compares with the equivalent on Mastodon, is that the contrast between the two platforms, and how personal and friendly they are, has been accentuated for me. Yes, that is partly due to the smaller numbers of folks on Mastodon, and the fact that those that are there -- the early adopters -- are perhaps more concerned with growing the platform organically and in a fashion that conveys and encourages friendship and kindness.
While I haven't personally been exposed to much of the less savoury content on Twitter, it is by all accounts not only very much there, but growing and becoming ever more vitriolic in some corners.
Perhaps that's an unavoidable side effect of Twitter being a single, gigantic, central, ungoverned (or ungovernable?) platform. Ultimately the hate comes from humans that are on it, not the platform itself. But that is only half the story, and I don't have the direct experience, or the authority, to talk more about this. Suffice it to say that while it's not been the main factor influencing my decision, it has been a factor.
So there you have it. I wasn't sure what I was going to write in this post; I just opened up my editor and started typing. And it seems that on the whole, the reasons for moving are largely positive rather than negative. I think that's a good basis for the decision.
By the way, if you're interested in getting started with Mastodon, I can recommend Fedi.Tips, "an unofficial guide to Mastodon and the Fediverse".
]]>Adding a plugin to your Tmux configuration requires you to specify a remote git repository, such as on GitHub or BitBucket. This would have meant pushing my fledgling plugin to GitHub just to test it out in the context of Tmux itself, but I wanted to keep everything local while I developed it. So I used a bare repository on the local filesystem. Here's how.
The plugins are managed by Tmux Plugin Manager (TPM) and there's a great document on how to create a plugin here: How to create Tmux plugins.
While the plugin itself is essentially a script that uses tmux commands, including the plugin in your configuration and testing its installation and use in a real session means that the plugin code has to be in a remote repository. The examples in the main TPM README imply this:
set -g @plugin 'github_username/plugin_name'
set -g @plugin 'github_username/plugin_name#branch'
set -g @plugin 'git@github.com:user/plugin'
set -g @plugin 'git@bitbucket.com:user/plugin'
Nothing wrong with that at all, but I wanted to get the plugin right before pushing it anywhere like GitHub. So I remembered that you can initialise a repository with the --bare option (see the "Bare Repositories" section in this document), and this will effectively create a shared repository that can be used as a remote.
I was developing the plugin in a directory called:
~/work/scratch/tmux-focus-status/
and had run git init in there, and committed my work.
I then created a bare repository with the --bare option, like this:
cd ~/work/remotes/
git init --bare tmux-focus-status.git
The convention is to add the .git ending to repositories initialised like this.
Then I set up this location as a remote in my plugin directory:
cd ~/work/scratch/tmux-focus-status/
git remote add local ~/work/remotes/tmux-focus-status.git
Calling the remote local might seem a little counter-intuitive, but it works for my brain.
Having pushed the work to that remote:
cd ~/work/scratch/tmux-focus-status/
git push local main
I could then reference that local filesystem remote in my Tmux configuration, alongside my other plugin lines, like this:
set -g @plugin 'tmux-plugins/tpm'
set -g @plugin 'tmux-plugins/tmux-sensible'
set -g @plugin 'christoomey/vim-tmux-navigator'
set -g @plugin '/root/work/remotes/tmux-focus-status.git'
I was doing this in a temporary dev container, hence the /root home directory.
And on invoking the TPM "install" function (with <prefix> I, see the keybindings info), the plugin was successfully installed:
Already installed "tpm"
Already installed "tmux-sensible"
Already installed "vim-tmux-navigator"
Installing "tmux-focus-status"
"tmux-focus-status" download success
TMUX environment reloaded.
Done, press ESCAPE to continue.
Excellent!
Using local filesystem based remotes is also nicely summarised in How to use local filesystem remotes with git which I found helpful.
Note: You can also just clone the repo from the directory it's in, but it feels nicer and more organised to bridge this via a "real" git remote connection. Thanks to my son Joseph for pointing this out :-)
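To summarise, the whole flow, with the bare repository acting as a local "remote", can be sketched as a self-contained script. The temporary directories and the empty commit here are purely illustrative, standing in for my actual ~/work paths and real plugin work:

```shell
# Temporary stand-ins for ~/work/scratch and ~/work/remotes (illustrative)
work=$(mktemp -d)
remotes=$(mktemp -d)

# The working repository, with a single (empty, illustrative) commit
git init -q "$work/tmux-focus-status"
git -C "$work/tmux-focus-status" \
    -c user.email=demo@example.com -c user.name=Demo \
    commit -q --allow-empty -m 'initial plugin work'

# The bare repository, acting as the local "remote"
git init -q --bare "$remotes/tmux-focus-status.git"

# Wire them together and push
git -C "$work/tmux-focus-status" remote add local "$remotes/tmux-focus-status.git"
git -C "$work/tmux-focus-status" push -q local HEAD:main

# The commit is now visible via the bare repository
git --git-dir "$remotes/tmux-focus-status.git" log --oneline main
```

From there, the path to the bare repository is what goes in the @plugin line in the Tmux configuration, just as with the /root/work/remotes example above.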
]]>Anyway, while I've used Tmux for a long time, I've never really used a plugin manager, so this week I took a look at Tmux Plugin Manager (TPM). It worked really nicely out of the box, but there were a couple of things I wanted to sort out for my particular setup.
The short video Tmux has forever changed the way I write code has a nice overview of Tmux configuration, including the use of plugins with TPM, which I'll use here as an example. The relevant configuration in tmux.conf looks like this, defining three plugins (well, two plus TPM itself):
set -g @plugin 'tmux-plugins/tpm'
set -g @plugin 'tmux-plugins/tmux-sensible'
set -g @plugin 'christoomey/vim-tmux-navigator'
run '~/.config/tmux/plugins/tpm/tpm'
The TPM command to install the plugins defined in your configuration is <prefix> I, where <prefix> is of course the Tmux prefix key, which in my case is Ctrl-space. Invoking this causes the plugins listed in the configuration to be installed, followed by status output that looks like this:
Already installed "tpm"
Installing "tmux-sensible"
"tmux-sensible" download success
Installing "vim-tmux-navigator"
"vim-tmux-navigator" download success
TMUX environment reloaded.
Done, press ESCAPE to continue.
I don't do much at the native OS level of my laptop (which runs macOS); instead, I work in a Linux-based dev container. My PDE is essentially a Docker image, from which I create my working environment, usually just a single container, which, when it starts up, runs tmux.
So when I build my image, I'd like to have the Tmux plugins pre-installed, rather than go through a manual setup, i.e. have to use <prefix> I when I jump into the container for the first time.
After a bit of digging, I found that I can do this by running the bin/install_plugins script, which is part of TPM.
So here's a simplified extract of my dev container's Dockerfile definition (ignore the use of the root user here, this is from a testing image setup):
ARG SETUPDIR=/tmp/setup
ARG CONFDIR=/root/.config
# Basic setup
RUN mkdir $CONFDIR
RUN mkdir $SETUPDIR
# Tmux
ARG TMUXVER=3.3a
RUN cd $SETUPDIR \
&& curl -fsSL "https://github.com/tmux/tmux/releases/download/$TMUXVER/tmux-$TMUXVER.tar.gz" \
| tar -xzf - \
&& cd "tmux-$TMUXVER" && ./configure && make && make install
# Tmux config, including plugins
RUN mkdir $CONFDIR/tmux \
&& git clone https://github.com/tmux-plugins/tpm ~/.config/tmux/plugins/tpm
COPY tmux.conf $CONFDIR/tmux/
RUN $CONFDIR/tmux/plugins/tpm/bin/install_plugins
# Off we go
CMD ["tmux"]
My basic Tmux config is copied to the configuration directory (COPY tmux.conf $CONFDIR/tmux/) and then the TPM bin/install_plugins script is executed.
When I enter the container, and find myself in a new Tmux session (thanks to CMD ["tmux"]), all the plugins are already installed. Nice!
TPM has a small number of key bindings for plugin management. The default key binding for uninstalling plugins that you've removed from the list in your configuration is <prefix> + alt + u.
My daily driver is an Apple MacBook Air. One of the (many) "interesting" features of MacBook keyboards, at least with some sort of English layout, is that you can't easily type a # character, which is especially frustrating as a developer. To get a # character you have to use Option-3, which is frankly ridiculous, but I've got used to it over the years.
Anyway, the Option key is the Alt (or Meta) key, which means that in order to use <prefix> + alt + u on this keyboard, I would have to change the terminal settings for the Option key, for it to act as a proper Alt key. But then I wouldn't be able to type # characters.
Instead, again after a bit of digging, I found that you can change these default key bindings. They're actually defined in a variables.sh file:
install_key_option="@tpm-install"
default_install_key="I"
update_key_option="@tpm-update"
default_update_key="U"
clean_key_option="@tpm-clean"
default_clean_key="M-u"
SUPPORTED_TMUX_VERSION="1.9"
DEFAULT_TPM_ENV_VAR_NAME="TMUX_PLUGIN_MANAGER_PATH"
DEFAULT_TPM_PATH="$HOME/.tmux/plugins/"
This allows me to add a line to my tmux.conf file to change the binding for the "clean" option (to uninstall plugins) to something different. I chose K for "(K)lean" (as hitting "C" after the Tmux prefix key is a common action to create a new window):
set -g @tpm-clean 'K'
Now I can uninstall plugins that I've removed from my configuration with <prefix> K. Here's an example of the uninstall status output, after I removed the line specifying the christoomey/vim-tmux-navigator plugin from my tmux.conf file and then hit <prefix> K:
Removing "vim-tmux-navigator"
"vim-tmux-navigator" clean success
TMUX environment reloaded.
Done, press ESCAPE to continue.
The removal of this plugin was just to illustrate the mechanism; I've just been looking into this plugin and I'll be using it as it's great - especially the extremely comprehensive README!
That's neat. I'll be embracing TPM from now on.
]]>If you've seen it and are curious about it, and want to execute it but don't know how, there are plenty of ways you can do it on the Web. You don't need to install Node.js on your machine if you don't want to.
It's important to realise that while Node.js is JavaScript, it comes with more libraries and features relating to the runtime context that it provides. Think of it as "JavaScript++". So you can't run the Easter Egg code in, say, the Chrome Developer Tools console.
But you can run it on the Web. Here are a few places where this is possible:
replit lets you "build software collaboratively with the power of AI, on any device, without spending a second on setup".
CodeSandbox "keeps you in flow by giving you cloud development environments that resume in 1 second".
RunKit "is a node playground in your browser".
And of course, it almost goes without saying that you can run it in a Dev Space in the SAP Business Application Studio.
So what are you waiting for? Stare at the code, try and work out what it's doing, what it emits, and if you get stuck, run it in one of these Web-based environments and see where it leads you. And have fun!
]]>The Developer Advocates at SAP have been busy over the past few weeks putting together the content for our now much anticipated annual event - Devtoberfest. If you don't know what Devtoberfest is, let me explain:
There's a key activity that's common to any developer, regardless of their area of expertise, interests, and craft. That key activity is learning. The world of development, of software, architecture, operations, and more, is moving at an ever increasing pace, and if there's one thing I've perhaps finally figured out, after over 35 years as a developer in the SAP tech ecosphere, it's that there's always more learning to do.
Learning is something we should be doing regularly. It doesn't matter whether the subject matter is brand new, or you're revisiting something you have already had experience with, to go deeper. It doesn't matter if the relevance to your current work tasks is only fleeting, and it certainly doesn't matter how you prefer to learn. Reading, watching videos, completing tutorials, taking part in discussions, asking and answering questions, earning points & badges - each one of these activities helps you to level up. Back in the early 1990s, I was working at the largest SAP R/2 installation in the world, and learned a valuable lesson from one of my colleagues there. That was to make time to read. I recount the story in the blog post Tech Skills Chat with JonERP - A Follow-on Story, and the key takeaway is: Always Be Reading.
Anyway. Sometimes one learns best alone. And other times, it's great to learn together. And that's what Devtoberfest is all about.
So over the next four weeks, I want to encourage you to make time for yourself as a developer, make time to learn, make time for Devtoberfest. Check out the many, many sessions we have for you over on the Devtoberfest events calendar, focused on five main topics, each of which has a colour code, and each of which falls on the same day each week.
It won't surprise you to realise that these core topics are also the backbone of any great SAP TechEd event too. And that's no coincidence. We want you to be prepared for SAP TechEd by being up to date, with your learning neurons revitalised and ready for more action, and hungry for more knowledge.
So dive in. Get started by heading over to, and joining, the Devtoberfest group on SAP Community. You'll find plenty of information in the blog post area on how things work, and what to do next.
See you there!
]]>In SAP Developer Challenge - APIs - Task 10 - Request an OAuth access token, my good friend and colleague Daniel stumbled into a problem while conveying OAuth client ID and secret values in a call to curl. It would have been something like this:
curl '<oauth-authorization-server>' \
-u "<clientid>:<clientsecret>" \
-d "grant_type=<grant-type>" \
-d "..."
With many services on the SAP Business Technology Platform, client ID and secret values contain special characters - notably, in this case, exclamation marks. Here's an example:
sb-ut-f86082c9-7fbf-4e1e-8310-f5d018dab542-clone!b254751|cis-central!b14
dfad81fe-a33d-4252-b612-d49cd9fd3a42$dE1F7W2F3-TrF9kIrkdQaliGqTKR_aCVcv-oaM7ZZ9x=
They also contain dollar signs.
Bash is a venerable and extremely capable shell, and supports a number of so-called Shell Expansions, where values are substituted for tokens on the command line. This happens before those values are interpreted as part of whatever command is to be executed. These expansions are initiated by special characters, two of which are the dollar sign ($) and the exclamation mark (!).
The $ character introduces shell parameter expansion, and in the very simplest of cases will substitute the value of a variable identified with the $ character, replacing the parameter or symbol itself. For example, if we have a variable ans with the value 42, then:
echo "The answer is $ans"
will emit:
The answer is 42
The ! character introduces history expansion. In the Bash shell, commands are remembered in a history, and can be recalled with the history builtin. Here's an example of the output from history:
1977 date
1978 git status
1979 git add .gitignore
1980 git commit -m 'do not track cache files'
If I wanted to rerun the git status command, I could invoke it like this:
!1978
That might not seem earth-shatteringly exciting, but for longer, more complex combinations of commands, it can be very useful. Remember also that some shells emit the current history number in the prompt, making it quick and easy to refer to a previous command. And in the older days of slower connections, especially serial terminal connections, the transmission of every character counted!
Shell parameter and history expansion happen inside double quotes. So if I try:
echo "everything!abc"
Then I see this:
-bash: !abc: event not found
The word "event" here refers to a line selected from the history. And as abc isn't in my history as a reference, the error message makes sense.
But if I were to try:
echo "everything!1978"
I would see:
everythinggit status
One thing to note for those of you working with OData is that the OData system query options are all prefixed with the dollar sign. For example, there's $top, $skip, $expand and so on. So if you were to use curl to request a URL like this (elided for brevity):
curl \
--url "https://.../Northwind.svc/Products?$top=2"
then you'd get rather more product entities than you expected. Instead of receiving just the first two, you'd get all of them. Why? Because through shell parameter expansion, the $top part was expanded into the value of the top parameter, which is (most likely to be) empty, making the actual URL passed to curl this:
https://.../Northwind.svc/Products?=2
Nicely, perhaps through Postel's Law, the Northwind service quietly ignores the random =2 which is thus sent as the query string part of the URL, and returns the entire products entity set.
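The effect is easy to reproduce in isolation. Here's a sketch with a hypothetical URL (example.com is just a placeholder), with no variable named top set:

```shell
unset top

# Inside double quotes, $top undergoes parameter expansion; as the
# variable is unset, it expands to nothing and the query option vanishes
echo "https://example.com/Products?$top=2"
# -> https://example.com/Products?=2
```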
These expansions work within double quotes in Bash. They are explicitly and deliberately not active within single quotes. There is in fact a lot more to know about the difference between single and double quotes in Bash, but all you need to remember for now is that you should only use double quotes when you know you want something magic to happen (such as expansions). If you can get away with using single quotes, then that is often the better way, where the data within remains "passive".
Here are those two examples from earlier, but expressed in single quotes. First, using an exclamation mark which in double quotes would invoke history expansion:
; echo 'everything!abc'
everything!abc
Now using a dollar sign, which in double quotes would invoke parameter expansion:
; echo 'The answer is $ans'
The answer is $ans
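Which brings us back to the original client ID and secret problem: wrap such values in single quotes and Bash passes them along untouched. A minimal sketch, using a made-up value loosely shaped like the real thing:

```shell
# Single quotes keep the $ and ! characters literal; in double quotes,
# $dE1F7W2F3 would be subject to parameter expansion (and, in an
# interactive shell, ! would trigger history expansion).
# The value here is made up, for illustration only.
secret='dfad81fe$dE1F7W2F3!x='
printf '%s\n' "$secret"
# -> dfad81fe$dE1F7W2F3!x=
```

So in the curl invocation at the start of this post, supplying the credentials as -u '<clientid>:<clientsecret>' with single quotes, rather than double quotes, is the safe form.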
The shell is a wonderful environment, but can be arcane and odd around some edges. But how is that different to the universe in general, right?
]]>We're running an SAP Developer Challenge this month, on the topic of APIs. In a discussion relating to Task 2 - Calculate Northbreeze product stock, Wises shared his process and thoughts in a nice reply to the task thread, in which he said, about using curl:
I found that I have to manually replace blank(space) with %20 in the $filter block to be able to fetch an OData API.
I thought I'd write a few notes on this phenomenon, which may help others, and which is a good opportunity to share some cool curl features.
One of the many lovely aspects of OData, especially with regards to the query and read operations, is that you can try things out in the browser, because both query operations and read operations are accomplished using the HTTP GET method.
Here's a simple example, related to the topic of Task 2, using the OData V4 version of the Northwind service. Consider a query operation on the Products entity set to select the names of the first three products that are not discontinued, ordered by product ID.
If you copy-paste the entire query operation URL into your browser's omnibar:
https://services.odata.org/V4/Northwind/Northwind.svc/Products?$filter=Discontinued eq false&$select=ProductName&$top=3&$orderby=ProductID
and then send that request off, you'll get an appropriate response:
{
"@odata.context": "https://services.odata.org/V4/Northwind/Northwind.svc/$metadata#Products(ProductName)",
"value": [
{
"ProductName": "Chai"
},
{
"ProductName": "Chang"
},
{
"ProductName": "Aniseed Syrup"
}
]
}
Your browser most likely didn't bat an eyelid at the whitespace in the URL, i.e. the space before and after the eq operator in the $filter system query option.
But if you look at what it actually sent to the Northwind server, you'll see that it automatically URL encoded the whitespace.
Spaces, and other special characters, are generally unwelcome in URLs, which are restricted to ASCII, and on top of that, there are reserved characters which have special meaning in the URL structure.
These characters must be URL encoded. This is also known as "percent encoding", because the encoding replaces a character with its corresponding ASCII value, in hex, prefixed with a percent sign.
So in this example, this part of the query string:
$filter=Discontinued eq false
became:
$filter=Discontinued%20eq%20false
because space, while it has a representation in ASCII, and is not one of the reserved characters, is generally not allowed. Otherwise, how would software, or even we humans, tell where a URL ends?
And of course, the ASCII character code for space is 32 in decimal, which is 20 in hex.
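That arithmetic can be checked in the shell itself: printf's leading-quote form yields a character's numeric code, and the %20 construction below is just for illustration:

```shell
# The ASCII code for a space character, via printf's leading-quote form
printf '%d\n' "' "     # -> 32
# And 32 in decimal, rendered as a percent-encoded hex escape
printf '%%%02X\n' 32   # -> %20
```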
When you use curl or similar tools, there's no context in which to automatically and silently modify URLs. At least, I wouldn't want curl to do that without me asking it to. So if you tried to use curl to request the URL above, it would send it verbatim, which would be erroneous, and the request would fail.
At this point, what one would normally do is pre-empt this failure by properly encoding the URL before giving it to curl. There are many libraries and utilities to do this, and you could even write your own; it's not complex. Basically, this is the right way to go, to avoid giving bad data to curl to process.
However, curl has some lovely features, including the ability to send data with the request. This is normally done using the --data option, but there's a --data-urlencode option too, which will URL encode whatever you pass with this option.
Now, typically, one might say normally, these options are used in the case of POST requests, where the data is sent in the body of the request, i.e. in the payload. Often this is in the form of name=value pairs which usually should be URL encoded, in the context of HTML form submissions, for example (have you ever wondered why the default Content-Type header value sent by curl is application/x-www-form-urlencoded?).
Anyway, OData query and read operations are performed with HTTP GET, not HTTP POST.
But.
We can still make use of --data-urlencode and still have the system query options (such as our $filter example here) sent in the query string of the URL, rather than in the request body. And that is if we use the --get option (the short version is -G). Here's what the man page says about this option:
When used, this option will make all data specified with -d, --data, --data-binary or --data-urlencode to be used in an HTTP GET request instead of the POST request that otherwise would be used. The data will be appended to the URL with a '?' separator.
Perfect!
So the curl equivalent of requesting the URL above, where the whitespace remains, is as follows (I'll also add --verbose so we can see what happens when we send the request, and an Accept: application/json header too):
curl \
--get \
--verbose \
--header 'Accept: application/json' \
--data-urlencode '$filter=Discontinued eq false' \
--data-urlencode '$select=ProductName' \
--data-urlencode '$top=3' \
--data-urlencode '$orderby=ProductID' \
--url 'https://services.odata.org/V4/Northwind/Northwind.svc/Products'
Here's what this produces (some verbose output removed):
> GET /V4/Northwind/Northwind.svc/Products?$filter=Discontinued%20eq%20false&$select=ProductName&$top=3&$orderby=ProductID HTTP/1.1
> Host: services.odata.org
> User-Agent: curl/7.74.0
> Accept: application/json
>
< HTTP/1.1 200 OK
< Content-Length: 195
< Content-Type: application/json;odata.metadata=minimal;odata.streaming=true;IEEE754Compatible=false;charset=utf-8
< Date: Tue, 08 Aug 2023 11:43:25 GMT
< Server: Microsoft-IIS/10.0
< Access-Control-Allow-Headers: Accept, Origin, Content-Type, MaxDataServiceVersion
< Access-Control-Allow-Methods: GET
< Access-Control-Allow-Origin: *
< Access-Control-Expose-Headers: DataServiceVersion
< Cache-Control: private
< Expires: Tue, 08 Aug 2023 11:44:26 GMT
< Vary: *
< X-Content-Type-Options: nosniff
< OData-Version: 4.0;
< X-AspNet-Version: 4.0.30319
< X-Powered-By: ASP.NET
<
{"@odata.context":"https://services.odata.org/V4/Northwind/Northwind.svc/$metadata#Products(ProductName)","value":[{"ProductName":"Chai"},{"ProductName":"Chang"},{"ProductName":"Aniseed Syrup"}]}
So there you have it. With curl you can have your cake and eat it. If you're not using curl, give it a spin today. After all, as well as being used everywhere on earth, it's also used on Mars.
And that was to cure the stove.
The stove is a model from Chilli Penguin, specifically a "Fat Penguin, Tall Order". You can read more about it in the post Living on a narrowboat - the stove as the heart of the home which I wrote in eager anticipation.
The whole journey of getting the stove delivered and fitted on the narrowboat was a little fraught, with manufacturing and delivery delays, but in the end, it arrived just in time for the team at The Fitout Pontoon to install it before launch day. Now I'm on the narrowboat, I couldn't wait to try it out.
Like any painted cast iron stove, it has to be "run in", run at a low, then medium, then high temperature burn, to cure the paint and also burn off any solvents. The cooler weather today had me convinced that it was a good time to do that. I also wanted to properly set up the flue and chimney, which the team had ordered and fitted; the part that goes outside is quite tall (1 metre, not including the rain cowl) and I wanted to try it out. You can see the chimney and cowl here, as I'd set it up to check it for size, when I was moored at Redhill Marina:
The chimney girth is also larger than I've seen; it's double-skinned, based on a flue diameter of 130 mm, which you can see here in this top-down view of the chimney collar fitted to the roof:
So anyway, I followed the instructions in the Chilli Penguin stove guide, which basically described a sequence of three fires (small, then medium, then hot) in one session.
I had bought a Valiant Stove Thermometer from Sandiacre Stoves in Long Eaton, when I was on the Erewash Canal a week or so ago. What a lovely shop, with a great range of products and super friendly staff. I bought other stove accessories there at the same time, including a coal scuttle and a big bag of split logs.
I wouldn't have managed the sequence and temperature control of the process without the thermometer, which worked well. It has a magnet, and I'd originally placed it on the stove top, but then thought better of it and stuck it to the fire door, which would give me a more accurate reading, because the cooking oven sits between the fire box and the stove top. I didn't attach it to the flue either, for a similar reason - it's double skinned, like the chimney.
The small fire started well and I could see that the flue and chimney was producing a great draw:
The hot fire was pretty intense, and it was at that point that I re-assessed the temperature of the day; perhaps it still wasn't cold enough outside to light a fire! Anyway, I had committed, so I just opened more windows and let the process complete. I decided that a beer would help me cool down. Pretty reasonable logic, right?
I noticed that while the stove was hot, during the hot fire, the oven door wouldn't open. It turns out that this was because the catch wasn't quite sitting right. Luckily the catch is simple and protrudes from the iron surface of the front of the stove, so I was able to adjust it slightly so that the door handle catch latched onto it more cleanly, without clashing and getting stuck on it.
Clearly, given its height, the chimney is not going to fit under many bridges. In other words, it's not a "cruising chimney"; those are typically around 30 cm high, and are such that you can keep a fire going while cruising. I'm going to try and source a shorter chimney in the medium term (it's from Jeremias and the dimensions mean that it's not something I can buy off the shelf in a chandlery, for example) but I may also try fitting just the cowl itself to the collar. It may well be that this won't deliver enough draw for a fire, but we'll see.
While passing Sawley Marina a few days ago, I called in at the chandlery there to pick up a few supplies, including a heat-powered stove fan, also a Valiant model.
These are designed to sit on top of the hot stove, and use the heat to power a small motor, which in turn drives the fan, which distributes the warm air throughout the cabin.
I tried this out too, and it worked well. Having read the instructions, I noted that for optimum performance and lifetime, I should place it behind and not in front of the flue (I think otherwise the motor and other small components wear out sooner due to the direct heat from the proximity to the flue). You can't see it in action in the photo above; I only just remembered I had it towards the end of the cure process.
So. That's the stove cured! Is it wrong that I'm hoping for some cold days soon so I can use the stove for real?
]]>While I was at the marina I had a day off work, and found that the welldeck, i.e. the space in the bow, was perfect for relaxing.
After an early morning run, plus topping up my water tank (the capacity is around 450 litres), emptying my toilet cassette, and using the washing machine and dryer in the laundry block next to the Egret moorings, it was time to get some work done before I moved off the finger pontoon, out of Mercia Marina, and onto the Trent & Mersey canal again.
The Developer Advocates team is running a series of monthly Developer Challenges, with my friend and colleague Nico running the July challenge on CAP. I'll be running the August edition, so my work today involved making some preparations for that. The team puts in a lot of work behind the scenes to create content and events for SAP developers, and I'm proud to play my part.
I chatted with my son Joseph about the route I was planning to take, and smiled when he mentioned the calculation for the approximate journey time. It's just a rough and ready measure but is enough to get an idea. You take the number of miles, plus the number of locks you'll pass through, and divide that total number by three to get the hours it will take. Of course, there are aspects that will make reality a little different.
One aspect is your cruising speed and the number of moored boats that you'll pass; the rule is that you should pass moored boats at tickover speed, to avoid having the wash from your forward motion rock the boats and potentially loosen mooring pins and ropes.
Another aspect is lock navigation. Some locks take longer to pass through than others. Not only because it depends on whether the lock you're approaching is "in your favour", i.e. the level of water in the lock is at the level you're at, and you don't need to empty or fill it. But also because some are simply bigger than others, more difficult to work within as well as to fill and empty.
An example of the contrast is between Sawley Locks No 2 and Dallow Lane Lock No 7. Sawley Locks are ginormous and electrically operated (the lock gates are opened and closed via hydraulic rams). When I passed through last week, I felt my narrowboat was like a small rubber duck bobbing around in the bath. Passing through Dallow Lane Lock, which I did on this route today, was in sharp contrast. It's a single width lock (my 6'10" narrowboat only just fit) with two tiny gates at the lock tail and a single gate at the head. It was also a fairly shallow lock, with a rise of only three and a half feet. The combination of it being a single lock and having a small rise meant that I could get from one side of the lock to the other by walking across my boat, either over the roof (when the lock was empty) or across the stern (when it was full). In turn, this meant that operating the lock (single handed, as I am) took far less time. I haven't got any photos of either lock, but there are photos in the two linked resources.
Anyway, according to CanalPlanAC, the route was 5 miles and 2½ furlongs long, with 1 lock, which equates to approximately 2 hours of travel time. Here's the route:
You can't quite see all the detail (including Dallow Lane Lock) due to the zoom factor, but you get the idea.
I took it easy, and arrived at Shobnall Fields Visitor Moorings after just over two hours.
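That tallies nicely with the rule of thumb. As a quick check, the sum (a furlong being an eighth of a mile) can be sketched with awk:

```shell
# (miles + locks) / 3 gives approximate hours; 2.5 furlongs = 2.5/8 miles
awk 'BEGIN { miles = 5 + 2.5/8; locks = 1; printf "%.1f hours\n", (miles + locks) / 3 }'
# -> 2.1 hours
```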
After passing through Dallow Lane Lock, I moored up a few hundred metres later, to check out the best place to moor for the evening. The visitor moorings (on the opposite bank to the towpath) seemed quite full, but I managed to squeeze myself on the end.
The place is pretty peaceful, and my spot affords a lovely view over the park. You can see in this photo the mooring restrictions. I have a continuous cruising licence and generally one can moor for up to 14 days in a single place, but must then move on. There are exceptions to this, and here is an example, where you can see that boats can only moor here for two days. I'll be moving on yet further west on Sunday, so this suits me fine.
Of course, being a Friday, and being in Burton-on-Trent, probably the most famous town in the UK with regards to beer and beer history, it was only right that I celebrated the start of the weekend with a beer. And so I did. Cheers!
These are the resources I use; there may be others of which I'm not aware, but this combination provides me with everything I've needed so far.
There are some great online resources. The Canal & River Trust's website is a good source of information generally, but it also has some great maps of the canal system.
You can dive into the detail either by selecting a canal or river from the list:
Or you can start with the overall network map:
and then zoom in to see the detail:
This section shows where I am right now (in the marina just west of Willington Rd) and a short stretch of the Trent & Mersey Canal, with lots of detail such as:
"winding" is pronounced as you would say "wind" as in what blows, rather than what you do with a clock
For me, the winding points are, from a navigation perspective, one of the most important features, in that they are almost the only places where you can turn round (canal junctions offer space too, but they are few and far between). They're short sections of the canal (usually between around 10 and 20 metres of canal length) where the canal is extra wide, usually going into a point, giving space to turn the narrowboat by 180 degrees.
In the case of this map section, it was good to know that there was a winding point in Willington, as I was coming along the Trent & Mersey canal from the east, and could rest assured that if I missed the entrance to Mercia Marina (just after bridge 22, going west) and had gone too far to reverse, I could continue to the winding point in Willington to turn round and go back.
Don't ask me why some bridges have letters after the numbers, and some don't. I haven't figured that out yet. I'm guessing it relates to where new bridges are added and there's no space numerically to insert them (why does this remind me of line numbering in steps of 10 in BASIC?).
This is in fact a good example of where Google Maps is actually very useful; often I'll use the satellite view to see what the turning point looks like, to gauge for example whether it's of a decent size, and what's nearby:
I can also avail myself of any street view resources too, which helps:
The combination of the CRT maps and Google Maps gives me a good start in working out where I'm going, what's available where, and how I might get to my destination.
But there's an online resource that is far better suited to planning a journey on the canal system, and that is the awesome CanalPlanAC.
You can plan journeys from A to B, find and explore points by name, and get everything you need (and more) from the data that has been clearly lovingly gathered. I'd heartily recommend you explore what it has to offer, as I cannot begin to describe its richness.
As well as having information on places, and how those places relate to other nearby places (this next screenshot is of the place Shardlow, where you can also see the nearest other places of interest, including pubs, winding holes, bridges and locks), you can plan routes in great detail.
Here's an example of a route that I'm planning for the coming weekend, towards the village of Alrewas.
First, having looked at the CRT and Google Maps to work out roughly where I think I want to end up, I search for a point in CanalPlanAC, which offers me a list of specific and known places:
On choosing "Alrewas Road Bridge No 48" (which is roughly where I plan to moor on arrival), I'm taken to the custom journey page with everything I need, including a detailed route:
and an accompanying (and zoomable) map:
While navigating I have my phone by my side, and have the Open Canal Map app open and ready to look at. It's a good combination of the CRT and CanalPlanAC resources, and is useful to quickly double check route details and re-confirm the locations of upcoming locks and winding points.
I have some of the Collins Nicholson Waterways Guides which are ringbound books that fold nicely flat and have maps and plenty of curated detail, with some useful opinionated navigation advice too based on experience.
Here's my collection, showing one of the guides (Four Counties & the Welsh Canals) open and inside a waterproof pouch so I can keep it on the stern with me even in the rain:
Of course, because canal navigation is on a fairly small scale, given the travel speed and the distances planned for any given journey, it's often the case that one can walk or run ahead to see what's coming. I have used some of my morning runs to do this; here's an example of a run from Shardlow, where I was moored, to Weston Lock, and back, to see what both Weston Lock and Aston Lock were like, and to check the canal stretch along the way:
Specifically, the gearbox oil needs to be changed after 25 hours. While I'd participated in a great course at the Narrowboat Skills Centre on boat engine maintenance, the reality of tackling this myself on a new engine, and not messing it up, was a little daunting. So I'd booked in with Streethay Wharf Engineering, who are based in Lichfield but also have on-site facilities at Mercia Marina which is where I was heading, and where I'd approximately be when the engine hour count reached 25.
At just after 0730 I quietly moved off my mooring on the Trent & Mersey Canal where I'd spent the night. Here's a snap of the view from the stern of my boat that evening.
The mooring was just before the entrance to the marina, into which I carefully and slowly manoeuvred, backing onto the first pontoon, one of a few that Streethay use there. I didn't do too badly; there was plenty of space in front for me to turn to port before reversing.
I waited a while for the engineer to come by; there was a mix up at the office, and that, combined with the general concept of "canal time" (things will happen when they happen) meant that I had to gently persuade them to honour the agreed morning time for the service.
While I waited I took the opportunity to empty my toilet cassettes in an Elsan facility there, and fill up my water tank. I even scrubbed down the roof, bow and stern. Always something to do!
David from Streethay came by around 1130 and I asked if he would mind if I watched what he did. He was superb, and we had a great chat while he changed the gearbox oil. I watched him intently and am confident that I could do it myself next time. Here's what he did:
the gold nut has a dipstick / level indicator attached
The oil David used was Comma's "Gear Oil EP80W-90 GL-4 Mineral" (marked GBP 10.00 per litre) and approximately 1.5 litres were replaced.
After that it was time to head over to a visitor mooring spot in the marina. I'd previously booked a week on a visitor mooring with Mercia Marina but circumstances had conspired to prevent me from making it. However the lovely Jules, who works in the office, had kindly agreed to credit those days to my account, which was great.
I had agreed with Jules to take three of the seven days I had credit for, and she assigned me a mooring at Egret 15, finger pontoon position 15 in the "Egret" area which is specifically for 57' boats.
To get to the "Egret" area I had to navigate through a large part of the marina. This was quite daunting, given the number of folks walking up and down the boardwalk, and others watching as they drank their coffee in the cafes that overlooked the water. I took it super slowly, alternating between forward gear and neutral to introduce some extra slow forward motion; steering only works when the propeller is pushing water past the rudder though, so the neutral gear sections were quite short! Anyway, I moved the boat into the "Egret" area, and used a combination of forward and reverse movements to gently and slowly back onto the finger pontoon. And I managed it, without any bumps, scrapes, or other collisions, pulling the boat down the pontoon manually with the centre rope for the last part. Phew!
Can you spot FULLY RESTFUL in there? It looks rather diminutive next to its neighbours!
I explored the marina which is rather large and which has plenty of facilities, including multiple blocks each with toilets, showers and laundry.
There were also refuse areas for general waste and rubbish for recycling. I took advantage of every single one of those facilities. I was very impressed with the organisation and cleanliness. Given the price of a visitor mooring (GBP 15.00 per day, reduced to GBP 13.00 per day for a week, which is what I'd originally booked), it's pretty good value.
That said, laundry was extra - I had to pay a deposit for a credit card sized device and load money onto it, but the wash and dry cycles were reasonable (GBP 3.50 for a wash, GBP 1.20 for a dry). I also had to pay a GBP 20.00 deposit for a key fob which gave me access to all the non-public areas of the marina.
I slept soundly yet again. I think the open air and floating home suits.
This time it was in Guetersloh, hosted by Reply and the very friendly and helpful Raphael Witte. I arrived in the warm early morning after a short bus ride from the town centre - in fact the bus dropped me off right outside the offices!
The setup was excellent. We had a breakout room plus two work rooms with plenty of power and Internet connectivity. The rather advanced TV / projector display mechanism almost had us foiled ... but after a while we figured it out, although at one point we were projecting onto the large display via a Teams meeting between me and Raphael, where I shared my screen and he relayed it to the display through his software-based connection to the display share device. Give me old fashioned direct HDMI cables plugged into the back any day of the week :-)
The participants were all eager to get started, and all had a great can-do attitude that we needed to work around some initial access challenges. In the end, in fact, every participant ended up going for the VS Code + dev container based working environment which worked brilliantly for everyone. This was also a great testament to the power and flexibility of dev containers, about which I have written in the past, in a three-part series Boosting tutorial UX with dev containers.
We worked through the exercises, learning together about extending existing services and schemas, external APIs from SAP S/4HANA Cloud, the SAP Business Accelerator Hub (formerly known as the SAP API Business Hub), and how to find and dig into APIs that are detailed there.
Then we set about importing an API definition into an existing project, learning along the way about internal and external mocking, various useful features of the CAP CLI (cds), and created a separate profile in the environment for a direct connection to the SAP Business Accelerator Hub's sandbox systems.
Not only that, but we learned about the different levels of integration, and got a hands-on feel for the best ways to extend existing CDS service and entity definitions, as well as wrap imported external services with a reduced surface area.
All this brain work and conversation was boosted by a midday break for lunch, which was provided by Reply and was delicious, thank you!
All in all it was a very enjoyable day. All the participants worked hard, had some great questions which provoked interesting side discussions. This is a key part of SAP CodeJams - the conversations and collaborative learning.
Thanks again to Reply and to Raphael for hosting, and to everyone for showing up and taking part!
Feedback for "Data Lake API": https://github.com/SAP-docs/sap-datasphere/issues/13
Taking the basic information available for this issue via one of the endpoints in the Issues API, we get an object returned with properties that we can list like this:
gh api \
--cache 1h \
repos/SAP-docs/sap-datasphere/issues/13 \
| jq keys
There are quite a few properties, many of them ending in _url:
[
"active_lock_reason",
"assignee",
"assignees",
"author_association",
"body",
"closed_at",
"closed_by",
"comments",
"comments_url",
"created_at",
"events_url",
"html_url",
"id",
"labels",
"labels_url",
"locked",
"milestone",
"node_id",
"number",
"performed_via_github_app",
"reactions",
"repository_url",
"state",
"state_reason",
"timeline_url",
"title",
"updated_at",
"url",
"user"
]
A simple with_entries, which is in fact just a combination of its two sibling functions (to_entries | map(foo) | from_entries), does the trick:
gh api \
--cache 1h \
repos/SAP-docs/sap-datasphere/issues/13 \
| jq 'with_entries(select(.key | endswith("_url")))'
This gives:
{
"repository_url": "https://api.github.com/repos/SAP-docs/sap-datasphere",
"labels_url": "https://api.github.com/repos/SAP-docs/sap-datasphere/issues/13/labels{/name}",
"comments_url": "https://api.github.com/repos/SAP-docs/sap-datasphere/issues/13/comments",
"events_url": "https://api.github.com/repos/SAP-docs/sap-datasphere/issues/13/events",
"html_url": "https://github.com/SAP-docs/sap-datasphere/issues/13",
"timeline_url": "https://api.github.com/repos/SAP-docs/sap-datasphere/issues/13/timeline"
}
What to_entries, from_entries and with_entries give us is a way to process properties whose names are unknown to us until execution time. Each property is normalised into a static structure with well-known property names. Here's an example:
jq -n '{question: "Life", answer: "Forty Two"} | to_entries'
This emits a stable, predictable structure:
[
{
"key": "question",
"value": "Life"
},
{
"key": "answer",
"value": "Forty Two"
}
]
And from_entries reverses the conversion that to_entries performs. For example:
jq -n '
{question: "Life", answer: "Forty Two"}
| to_entries
| map(.value |= ascii_upcase)
| from_entries
'
This emits:
{
"question": "LIFE",
"answer": "FORTY TWO"
}
So this can be replaced simply with:
jq -n '
{question: "Life", answer: "Forty Two"}
| with_entries(.value |= ascii_upcase)
'
This has the same effect (because it's just syntactic sugar for the version with to_entries and from_entries):
{
"question": "LIFE",
"answer": "FORTY TWO"
}
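As a further illustration (my own, not from the original examples), with_entries can just as easily transform keys as values. Assuming jq 1.5 or later (for the regex-capable sub function), this strips the kind of _url suffixes we saw earlier from the key names:

```shell
# Transform keys rather than values: remove a trailing "_url"
# from each property name (sub() needs jq 1.5+ for regex support)
jq -nc '{comments_url: "c", events_url: "e"}
        | with_entries(.key |= sub("_url$"; ""))'
# → {"comments":"c","events":"e"}
```

The |= update operator applies sub to each .key in the entries stream, and from_entries (implied by with_entries) rebuilds the object with the new names.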
I'm going to try and write more of these short "snippet" posts, to break the cycle of only writing longer, more detailed and therefore less frequent ones. Let's see how it goes.
This wasn't the first instance of a CodeJam on this topic; the inaugural outing of the content took place in Utrecht, NL in February (see SAP CodeJam on Service Integration with CAP in Utrecht) so I was generally happy with how the content flowed. Nevertheless, I had been working on it recently, running up to this event:
Looks like at least one of my Developer Advocate colleagues Antonio has been putting work in on his CodeJam content this week too!
If you want to find out more about the CodeJams that we offer, I recommend you read this post from Tom Jung: So, You Want to Host a CodeJam! Everything you need to know, which also links to the list of topics available right now.
I started my journey to Brescia, specifically to the offices of Regesta S.p.A., the kind and welcoming hosts for this particular CodeJam instance, on Thursday morning in Manchester. I started out with a bus from home to Manchester Piccadilly station, for a train from Platform 13 to the airport.
After the flight, which was uneventful (even accounting for the usual experience at Manchester Airport), and a train from Malpensa airport, I reached the centre of Milan at the spectacular Milano Centrale station (you can see more photos of the station in this toot):
After a quiet evening and a good coffee at breakfast, overlooking the station:
I was ready to get the train from Milan to Brescia:
There I was met at the station by Valentino, the CodeJam organiser at Regesta. We travelled through the morning rush hour to the office which was perfectly set up for a great learning experience, and we were all soon underway.
The day flew by.
I can honestly say this was one of the most diligent groups of CodeJam attendees I've had the pleasure of working with. Everyone got properly involved in the content, asked great questions, worked with each other through each exercise, and made it easy for me to convey all the concepts and details. Thanks folks!
During lunch, provided by our kind hosts, we got a chance to chat more.
We also got another chance at the end of the day, where I learned from a Regesta developer about the awesome tools he's been working on - an NPM-like package experience for ABAP, compatible with and designed to complement abapGit. Definitely worth keeping an eye out for in the near future!
Perhaps it's worth explaining at this point what this specific CodeJam focuses upon.
Of course, you can get a general idea from the About this CodeJam section of the repo's main README file, but perhaps you want to know more.
In essence, we take a slow but sure, step by step approach to integrating an external service from the SAP Business Accelerator Hub (previously known as the SAP API Business Hub). In doing so, we take a route that introduces us to various CAP server features, cds commands, in-process and external mocking, initial data supply, and take a look at how to extend both services and entities.
Moreover, on that route, we learn about the cds environment, profiles, port control, and custom vs built-in resolutions of OData operation responses. Ultimately we bind in a real remote external service and have it work in harmony with our own local service.
Not only that, but we also dig deep into the philosophy and practicality of how, where and why we make changes and extensions in the places we do. Think of it as dipping into the essential topic of "keep the core clean" for CDS based services and mashups.
And of course, all the way through, we work through deliberate errors that are there for us to learn from and have fun with.
If this sounds like something you would like to experience, check out Tom's post that I mentioned earlier.
I headed back to Milan that same Friday evening to be closer to Malpensa airport for my flight, which is today (I'm writing this on Saturday). There I treated myself to a couple of excellent beers (a hyper local IPA and a West Coast DIPA) at a great place - Bierfabrik Milano.
I started writing this post at breakfast in the hotel this morning.
After another train journey back to the airport I'm finishing it off in the gate area while waiting for my flight back to Manchester, tired but happy at the conclusion of another successful CodeJam event!
In the first post in this series, I'm moving onto a narrowboat, I showed an image depicting the design of my narrowboat. My old friend Edwin asked me to supply some descriptions of what each of the numbers referred to. I started to write down all the details in response, but I soon found that it would be far too long as a single post, so I've split it up. This post covers the items in the stern area.
Here's the design image again (you can open up the image in a new browser tab where you can see it full size):
Here are what each of the numbers in the stern area signify. Each item's title is taken directly from the detailed design documents drawn up by Mark at The Fitout Pontoon.
This is the rail that goes round the rear of the stern, partly I guess to stop you falling over backwards into the water, and also designed to be somewhere you can perch. "Taff" is one of many wonderful words I'm discovering as I embark upon this new journey, and a taffrail is basically just that, i.e. "the handrail around the open deck area towards the stern".
Here's a photo of my narrowboat's taffrail from mid way through the steel build phase, when the main hull and cabin was done and undercoated, and the steering mechanism (rudder and tiller) had just been fitted:
A Morse control is a lever which is used to control speed and whether you're going forward or in reverse. It's a pretty neat design which uses two cables internally, both of which are connected to different parts of the engine: one is connected to the gear selector and the other to the throttle (I learned about this and many other aspects of narrowboat engines, and maintenance thereof, on the excellent Diesel Engine And Boat Maintenance course run by the Narrowboat Skills Centre). Morse is a brand of such a mechanism and its popularity has turned the term into a proprietary eponym (a bit like how the term "Hoover" has come to refer to any make of vacuum cleaner).
When the Morse control is at its highest point (at 12 o'clock, as it were) then the engine is in neutral and the throttle at the base level. Moving it forwards will select forward gear (via one of the cables) and also increase the throttle (via the other) according to how far forwards you move it. Likewise moving it backwards will select reverse gear and also increase the throttle according to how far backwards you move it.
The Morse control is usually fitted to the side of a small column, or tower, on a cruiser stern, in easy reach of the person steering. On that column are also some engine displays such as the engine RPM, battery charge, and various status lights. If a bow thruster is fitted (as is the case with mine), bow thruster controls will also often be mounted on the column.
You can see the column, with a lid at the top (under which the engine display panel and bow thruster controls will be found), on the left in the photo earlier.
In this photo, courtesy of The Fitout Pontoon (from their page on Engine Controls) you can see a Morse control, bow thruster controls and an engine display panel.
Here the controls and panel are mounted on an internal panel, rather than on a control column on the stern. This is most likely because the controls in this photo are fitted on a traditional or semi-traditional layout narrowboat (where the stern is very different, usually a lot more compact, and there is often no separate control column).
Under the deck boards is the engine bay, which of course is where the engine is, but also other equipment, notably the Webasto diesel powered heating system (see the "Diesel" section of Living on a narrowboat - embracing constraints for more details).
Here's what's underneath the deck boards. The steelwork was done by JSR Boats and they've welded a little memento onto the bottom of the engine bay:
JSR 90 2022
i.e. the company's initials, the boat number (this is the 90th hull they've built) and the build year 2022. What's extra lovely about this is that this modest but beautiful detail will go unseen for the most part, being directly below the engine, when it's mounted onto those four stands.
And here's a shot of the engine so mounted (and connected to the propeller shaft), a 50hp Shire from Barrus:
One of the many aspects I hadn't even thought about is how to minimise the amount of rainwater entering this engine bay area. Deckboards aren't watertight. Having a huge, single board would help, but would also be very difficult to manage because of the size and weight, and where would you put it while you had it removed?
Instead, the engine bay cover is split up into multiple boards. Not only does that make it easier from a bulk and weight perspective, but it also means that it's easier to access parts of the engine bay that one needs to get to more frequently.
One of these parts is the weed hatch, which affords direct access to the propeller, so you can sort things out when debris gets around it. This means you don't have to get into the water to do it. You can see the weed hatch on the left of this photo, it's the rectangular box-shaped part that you can see through all the way to the workshop floor:
Before you ask, yes, there is a lid that goes on the top of the weed hatch, one that has a clamp and rubber seal, too! In fact, you can see the clamp in this next photo, which shows a key part of the solution to keeping rainwater out of the engine bay - gutters:
In the previous photo, you can see the main gutter that is built in to the stern and goes around the edge of the engine bay access hole (that the deck boards will cover). The gutter lengths in this photo will go across the width* of the engine bay access hole, at the points where the deck boards meet (slotting in to the three pairs of cut-outs you can see).
*Another boat related word I learned today in fact, while reading an article in this month's Waterways World magazine, is athwartships, which means exactly that - going across the width of the boat.
Finally in the stern, we have a couple of gas lockers. These are primarily to store standard 13kg gas canisters, and I wrote more about them in the Gas section of Living on a narrowboat - embracing constraints, where there are a couple of photos. So I'll just add another photo here, showing how they're made into seats.
That's it for the stern items. Continuing to move forward towards the bow, I'll cover the items in the galley area next. Thanks for reading!
I noticed that I had developed a workflow where I would:
Yes, I could just use ijq to run the expression for real and get the results, but the command line is at once my IDE, my scratchpad, my recent memory and much more, so it's important that I have the jq invocation and expression in my command history, and it's also then ready for further processing with more commands in a pipeline that I can add (super easily through the power of vi mode).
Anyway, I finally recognised that this was suboptimal and decided to do something about it.
90% of the time I'm working in a dev container. Whether that's running on my SAP-supplied MacBook Pro, or on one of my own Chrome OS devices, or even remotely, via Tailscale, on my Raspberry Pi at home.
The definition of my dev container is in my dotfiles repo, and if you examine it, or watch some of the episode replays of our Hands-on SAP Dev show on our SAP Developers YouTube channel, you'll see that I use tmux, an awesome terminal multiplexer. Beyond the obvious and visual superpowers it offers, tmux also surfaces session, window, pane and buffer management to the command line level, which gives me access to them and enables me to make use of them too.
The last part of the context is that the underlying OS in my dev containers is Linux, which means I have a native UNIX based environment in which to work, regardless of the actual physical machine I'm using.
Because of the context, mainly tmux and a Linux environment, but also the nice way ijq works, the solution was straightforward.
The way ijq works, as I've mentioned also in the comments in the script, is that it uses STDOUT and STDERR to split what it emits. On exiting, it will emit the results of the jq expression to STDOUT (i.e. the data you've grabbed or manipulated with the jq expression) and it will emit the jq expression itself to STDERR. If anything was amiss with the jq expression, it will also add the error detail to STDERR as well as ending on a high return code.
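That split is easy to reproduce with any command that writes to both streams. Here's a minimal stand-in (fake_ijq is a made-up function, not ijq itself) showing the capture pattern the script relies on: STDOUT passes through untouched while STDERR is redirected to a temporary file:

```shell
# A hypothetical stand-in for ijq: results go to STDOUT,
# the jq expression goes to STDERR
fake_ijq() {
  printf '%s\n' '{"answer":42}'   # results -> STDOUT
  printf '%s\n' '.answer' >&2     # expression -> STDERR
}

tempfile="$(mktemp)"
fake_ijq 2>"$tempfile"            # STDOUT flows on; STDERR lands in the file
rc="$?"
printf 'captured expression: %s (rc=%s)\n' "$(cat "$tempfile")" "$rc"
rm -f "$tempfile"
```

The real script does exactly this with ijq in place of the stand-in, which is why the expression ends up in the temporary file while the results flow on down any pipeline.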
Anyway, to take advantage of tmux and how ijq works, I created a short Bash shell script, currently called zijq (the ABAP developers amongst you will know why). It currently looks like this:
#!/usr/bin/env bash
# Wrapper around ijq to capture the actual jq expression that was used,
# unless it ended in an error. The capture of the expression is into a
# TMUX paste buffer, so this will only be valid in a TMUX session.
# Just exec ijq directly if we're not in a TMUX context
[[ -z $TMUX ]] && exec ijq "$@"
# This is a temporary file to capture the jq expression in
declare tempfile
tempfile="$(mktemp)"
# When ijq ends, the output of the expression is emitted to STDOUT,
# and the expression itself is output to STDERR.
# Run ijq and capture STDERR and the actual RC
declare ijqrc
ijq "$@" 2>"$tempfile"
ijqrc="$?"
# Emit contents of temporary file to STDERR as ijq would
cat "$tempfile" >&2
# If things were OK, set the TMUX paste buffer.
[[ "$ijqrc" -eq 0 ]] && tmux set-buffer "$(cat "$tempfile")"
# Exit with whatever RC ijq ended with
exit "$ijqrc"
I've tried to explain the main parts in the comments, but here are a few extra notes.
When tmux is running, the environment variable TMUX is set with some internal information, and it's not set when tmux is not running. So I'm using that to check whether the script is in fact running in a tmux context, and if not, I use Bash's exec builtin to replace the current process (the script) with the execution of the normal ijq instead (there's no point keeping the context of the script around, hence exec).
The separate lines declare tempfile and tempfile="$(mktemp)" are a result of the wonderful shellcheck which keeps me straight on Bash style, accuracy and nuances (see the post Improving my shell scripting for more on this). If you're interested in the specific trap here, see SC2155 Declare and assign separately to avoid masking return values.
On executing ijq, I capture both the STDERR output into a file, and the return code into a variable. A return code of zero means success, anything else is failure. I'm only capturing the return code because I want this script to emit it when finishing, as if it were ijq itself (in case I have something downstream that examines that).
To stay true to ijq's behaviour at this point, I also emit to STDERR (>&2) whatever was captured there from the actual ijq invocation.
Most importantly, if the jq expression in my ijq session was OK (return code 0), then whatever was in the temporary file will be the expression, so that's when I use tmux's set-buffer command to put it into the buffer (in fact, there are multiple buffers, and lots you can do with them in tmux; check the man page for all the details). I can then just use the standard tmux key binding <prefix>[ to emit the contents wherever I am (which will be back on the command line).
Now I have this script, I can use ijq as normal (calling it as zijq, which I do often, and indirectly, via lf) and when I'm happy with the jq expression I've come up with, I have it in my buffer, as if I'd captured it from, say, copy-mode, and I can emit it wherever I want, such as on the command line, by hitting <prefix>[.
You can see it in action here, as I exit to the command line, and paste the jq expression into the jq -r '...' invocation.
In the source data, each entity was represented by an object, but I only wanted to include properties whose value types were either strings, numbers or booleans. I ended up taking the simplest route, in an expression supplied to a call to select, using type to check whether the type of a value was one of these.
What I found was another instance of the comma as generator that I wrote about a couple of weeks ago in Learning from community solutions on Exercism - part 2.
Moving away from the original source input, let's consider the simplest case where I want to pick out only numbers from a stream of values:
42, "hello", true | select(type == "number")
This produces:
42
So far so good. But what about picking out both numbers and strings? The simplest looking and perhaps idiomatic approach looks like this:
42, "hello", true | select(type == ("number", "string"))
As one would expect, or at least hope, this produces:
42
"hello"
But what exactly is going on with type == ("number", "string")? Visually it's not too far from representing what we want. And in fact it's the same pattern as we saw in "car" | . == ("car", "truck") in that previous post. Moreover, how does this actually work with select?
I'd noticed that select is defined as a builtin in jq itself:
def select(f): if f then . else empty end;
The jq manual says:
The function select(foo) produces its input unchanged if foo returns true for that input, and produces no output otherwise.
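As a quick standalone illustration of that behaviour (my example, not from the post), a numeric condition shows select either passing each value through unchanged or swallowing it:

```shell
# select emits each input unchanged when the condition holds,
# and produces nothing at all when it doesn't
jq -n '1, 2, 3 | select(. > 1)'
# => 2
#    3
```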
Before we try to use that, let's remove the select
from the expression for a moment to see what we get:
42, "hello", true | type == ("number", "string")
What we get is something that looks a little odd, at least at first:
true
false
false
true
false
false
How do we visually parse this? Well, it's three pairs of booleans, one pair for each of the input values 42
, "hello"
and true
, where each pair represents the result of comparing the type of the input value twice, with "number"
and with "string"
, in order. Splitting these pairs up with whitespace and adding some explanation, we get:
true :-- is number \ 42
false :-- is string /
false :-- is number \ "hello"
true :-- is string /
false :-- is number \ true
false :-- is string /
Then, reminding ourselves that the definition of select
is:
def select(f): if f then . else empty end;
then the values that stream through to select
are either emitted (.
) if the condition evaluates to true
, otherwise nothing is emitted (empty
) if the condition evaluates to false
.
This results in the following behaviour:
true :-- is number \ 42 / emitted --: 42
false :-- is string / \ not emitted
false :-- is number \ "hello" / not emitted
true :-- is string / \ emitted --: "hello"
false :-- is number \ true / not emitted
false :-- is string / \ not emitted
and thus:
42
"hello"
The fascinating thing is that if we were to have a duplicate entry ("number"
) in the parentheses on the right hand side, like this:
42, "hello", true | select(type == ("number", "string", "number"))
then our result would be different, and probably not what we were expecting:
42
42
"hello"
But knowing what's going on allows us to understand why. There are now three values each being tested not twice but three times:
true :-- is number \ / emitted --: 42
false :-- is string | 42 | not emitted
true :-- is number / \ emitted --: 42
false :-- is number \ / not emitted
true :-- is string | "hello" | emitted --: "hello"
false :-- is number / \ not emitted
false :-- is number \ / not emitted
false :-- is string | true | not emitted
false :-- is number / \ not emitted
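As an aside (my addition, not part of the original post): if you want each matching input emitted exactly once regardless of duplicates in the comparison list, jq's IN function (available since jq 1.5) collapses the stream of comparisons into a single boolean per input:

```shell
# IN(...) yields one boolean per input value, so the duplicate
# "number" entry no longer causes a duplicate emission of 42
jq -n '42, "hello", true | select(type | IN("number", "string", "number"))'
# => 42
#    "hello"
```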
While the superficial operation of this jq expression is sort of obvious, why it works is less so. At least to me. And in case it wasn't obvious to you either, I hope this has helped!
]]>On Thu 13 Apr we had an all-day SAP CodeJam Hands-on with the btp CLI and APIs, and on Fri 14 Apr, at the same location, there was the first ever public conference all about SAP BTP: BTPcon 2023. It made a lot of sense to run the two events next to each other; we got a lot of crossover conversation and attendance of both events from many folks.
Having arrived in Hannover on Wednesday evening, I made my way on the tram (A3 to Altwarmbuechen) to Isernhagen and Inwerken's offices. Inwerken were the kind hosts (spearheaded by SAP Community member Sascha Seegbarth) and provided us with a warm welcome:
They also had set up a great space to network and get down to business working through the CodeJam content, and this soon filled up with CodeJam attendees eager to get started:
One of the folks that came along, Matthias, was sporting his stickers (made by another SAP Community member, the great Ronnie Sletta) from our Hands-on SAP Dev live stream show. Go Matthias!
I was accompanied by fellow Developer Advocate Nico Schoenteich who joined up with me to run the CodeJam and help folks out, which was a welcome additional pair of safe hands, not to mention a great chance to work alongside one of my team mates. Thanks Nico!
The CodeJam proceeded and everyone successfully worked through all of the exercises. Again, one of the highlights, at least for me, were the discussions we had at the end of each exercise, before starting the next. There were some great questions and even more valuable opinions and thoughts shared all around. If you're interested in learning more about this particular CodeJam, head over to the main README where there's a general overview and also a list of the exercises.
On the following day there was BTPcon. Organised by Sascha and some great folks behind the scenes, this was a superb event. There were two types of sessions: hands-on sessions (150 minutes long) and talks (45 minutes long) and there was so much on offer that there were two tracks throughout the day:
You can get a feel for the depth and quality of content by taking a look at the programme on the BTPcon website. I enjoyed all of the sessions I attended, and the atmosphere was great. It's how conferences should be - a mixture of great technical content, interesting Q&A, and spontaneous corridor conversations where experiences and ideas were exchanged.
I was very grateful to get a speaking slot, for which I thank Sascha and the BTPcon organisers. I spoke about the Swiss Army Chainsaw of the JSON world, jq, the "lightweight and flexible command-line JSON processor" which just happens to be a very capable, Turing complete functional language.
In today's world of the cloud, which everyone knows is just Linux boxes glued together with JSON and shell scripts, having the power to handle JSON like a boss is super important. Even if you're not fully in the cloud, JSON abounds too - in representations of resources pulled and pushed with APIs, events and more.
Image courtesy of Enno Wulff
My talk consisted of me waving my arms about a lot, rambling, and working through example JSON scenarios and slicing through the JSON precisely with jq
.
I wrote the talk in the form of a document which more or less reflects what I said, and along with that document I've made the samples JSON files available, plus a Docker container description that you can use to build a container image which containing those files, with jq
and ijq
with which you can use to try out all the examples and follow along at home. Head over to the qmacro/level-up-your-json-fu-with-jq repo for everything I talked about and showed.
My talk also consisted of me wearing flip-flops which seemed to amuse some folks :-) I'd left my outdoor shoes (and socks) outside while the conference proceeded, but during my talk it started to rain heavily, soaking everything! Luckily, SAP Community member Enno (who was also one of BTPcon's co-organisers) was giving out SAP Inside Track Hannover swag, which included socks, which I gratefully accepted!
Image courtesy of Enno Wulff
It was a great two days, thanks not in a small part to Sascha and his colleagues at Inwerken, and of course to the CodeJam attendees and the conference attendees & speakers.
I hope that we can see a repeat of BTPcon next year!
And if you want to request a CodeJam, head over to Tom Jung's post So, You Want to Host a CodeJam! Everything you need to know.
]]>I was happy to be able to recognise a pattern in a tiny submission I made this evening to a repo of different language based implementations of a simple LED number display from Blag, an old friend of mine from the SAP world.
I wanted to contribute a jq version of the LED number display program for the repo, as it didn't have one. The first commit in my pull request contained a working version, which was this:
def segments: [
  [" _ ", "| |", "|_|"],
  ["   ", "  |", "  |"],
  [" _ ", " _|", "|_ "],
  [" _ ", " _|", " _|"],
  ["   ", "|_|", "  |"],
  [" _ ", "|_ ", " _|"],
  ["   ", "|_ ", "|_|"],
  [" _ ", "  |", "  |"],
  [" _ ", "|_|", "|_|"],
  [" _ ", "|_|", "  |"]
];
def digits: tostring | split("")[] | tonumber;
[segments[digits]] | transpose | map(join(""))[]
This was to be invoked like this:
echo 42 | jq -r -f led_numbers.jq
which would produce:
    _ 
|_| _|
  ||_ 
By way of explanation, assuming we have the two function definitions (segments
and digits
) already, then:
segments[digits]
would produce:
[
  "   ",
  "|_|",
  "  |"
]
[
  " _ ",
  " _|",
  "|_ "
]
One way of getting the "horizontal slices" of these LED numbers joined up onto the appropriate lines of output is to treat it as a matrix (in Conor & co's language I might use the term "rank 2") and transpose it.
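For anyone unfamiliar with transpose, here's a tiny standalone example (mine, not from the original repo) showing rows becoming columns:

```shell
# transpose turns an array of rows into an array of columns
jq -nc '[[1, 2], [3, 4], [5, 6]] | transpose'
# => [[1,3,5],[2,4,6]]
```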
So I wrapped this stream of arrays in an outer array with the array construction syntax []
:
[segments[digits]]
which gave me:
[
  [
    "   ",
    "|_|",
    "  |"
  ],
  [
    " _ ",
    " _|",
    "|_ "
  ]
]
which I could then transpose, which I did like this:
[segments[digits]] | transpose
resulting in:
[
  [
    "   ",
    " _ "
  ],
  [
    "|_|",
    " _|"
  ],
  [
    "  |",
    "|_ "
  ]
]
These subarrays were now ready for joining together as longer strings:
[segments[digits]] | transpose | map(join(""))
with the following result:
[
  "    _ ",
  "|_| _|",
  "  ||_ "
]
But I just wanted the plain strings, rather than have them enclosed in an array, so I used the array iterator to do that:
[segments[digits]] | transpose | map(join(""))[]
which gave me what I was looking for (remember that the -r raw output option is still in effect, so the stream of strings is output without the enclosing double quotes):
    _ 
|_| _|
  ||_ 
Great!
This was what I sent in the first commit in the pull request.
But the solution looked a little noisy, and after staring at it for a few seconds I realised that it was the last part that was bothering me:
map(join(""))[]
The pattern is: map(...)[]
. If I didn't want to keep the array shape, then why bother with the map
in the first place? It appeared to me that I could replace this with just the expression that I'd put inside the parentheses, in this case just the join("")
.
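The general equivalence at work here — map(f) followed by the array iterator producing the same stream as iterating first and then applying f — can be checked directly (my illustration):

```shell
# map(f)[] and .[] | f produce the same stream of values
jq -n '[1, 2, 3] | map(. * 10)[]'
jq -n '[1, 2, 3] | .[] | . * 10'
# both print: 10, 20, 30, one per line
```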
The only thing I had to do then, to allow the use of this pattern switch, was to embrace jq's natural streaming nature, and basically start streaming earlier in the pipeline, by using the array iterator directly on the output from transpose
, like this:
[segments[digits]] | transpose[]
This returns something similar to what we saw transpose
return earlier, but instead of a single value (an array of arrays of strings), it just returns a stream of the arrays of strings:
[
  "   ",
  " _ "
]
[
  "|_|",
  " _|"
]
[
  "  |",
  "|_ "
]
Then each of these three array values is passed downstream, where I only need the expression that was hitherto inside the map(...)[]
construct, i.e.:
[segments[digits]] | transpose[] | join("")
This indeed gave me the same result which I wanted:
    _ 
|_| _|
  ||_ 
This version feels more idiomatic, and I updated the line to look like this in the second commit in the pull request.
Thanks to Conor and his cohorts for helping me remember to look for patterns!
]]>I was just checking through my solution to this simple exercise, to compare it with the community solutions. There was nothing earth shattering but a couple of things jumped out at me that I thought might be worthwhile mentioning. I solved tasks 1 and 2 in this exercise like this:
def new_remote_control_car:
{
battery_percentage: 100,
distance_driven_in_meters: 0,
nickname: null
}
;
def new_remote_control_car(nickname):
new_remote_control_car | .nickname = nickname
;
First, this is a pair of definitions of a function named new_remote_control_car
. One that takes no arguments, and one that takes a single argument. One would refer to them as new_remote_control_car/0
and new_remote_control_car/1
respectively.
The jq wiki has a lot of interesting things to say here, and in the Lexical Symbol Bindings: Function Definitions and Data Symbol Bindings section of the jq Language Description page, we see that:
Note well that foo, foo(expr), foo(expr0; expr1), and so on, are all different functions. The number of arguments passed determines which foo is applied. We can and do refer to the first as foo/0, the next as foo/1, and so on.
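This arity-based dispatch can be sketched with a tiny, hypothetical greet function (my illustration, not from the exercise), where greet/1 calls greet/0 exactly as in the remote control car solution:

```shell
# greet/0 and greet/1 are distinct functions; the zero-argument
# call inside greet/1 resolves to greet/0
jq -n 'def greet: "hello"; def greet(name): greet + ", " + name; greet, greet("world")'
# => "hello"
#    "hello, world"
```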
I can't help notice the "Note well", which is a direct English equivalent of the Latin-based initialism "N.B." i.e. "Nota bene". I wonder if it was deliberate. Anyway, I digress.
This is why the above approach works so well, and feels so clean. The call to new_remote_control_car
in the second function definition will invoke the version from the first function definition, as there are no arguments being passed. I was initially thinking this feature might be a route to variadic function definitions, but on reflection it is something different, as there's only a single function definition in the variadic case, as can be demonstrated in this JavaScript example of a variadic function that uses the spread syntax:
myfun = (...xs) => xs.reduce((a, x) => a + x, '')
myfun(1,2,3)
// => '123'
I had a fascinating conversation about this with my son Joseph (who is infinitely more knowledgeable than me in the area of programming, language paradigms, semantics and implementations), and he suggested that this multiple function definition approach could even be compared to how recursion is defined in Haskell, as illustrated in this classic factorial function definition:
factorial :: Integer -> Integer
factorial 1 = 1
factorial n = n * factorial (n - 1)
In a way the factorial
function is being defined multiple times (well, twice here), but one thinks about this in terms of a way to define a recursive based solution.
In the definition of new_remote_control_car/1
above, we have this:
new_remote_control_car | .nickname = nickname
The output from a call to new_remote_control_car
goes through the pipe where .nickname = nickname
is then executed. This felt quite natural to me, defining (adding or replacing, or "upserting", to use that database oriented word) a value for a property.
There was an alternative approach to adding or replacing a property in an object, used in solutions from users glennj and bewuethr:
def new_remote_control_car($nickname):
new_remote_control_car + {$nickname}
;
This also works, because the addition operator +
can add objects:
Objects are added by merging, that is, inserting all the key-value pairs from both objects into a single combined object. If both objects contain a value for the same key, the object on the right of the + wins.
What makes this particular variation pleasing is the compact nature of the expression on the right of the +
, i.e. simply {$nickname}
.
This compactness comes about from jq's object construction shorthand, which allows an expression like {x: .x} to be written as just {x}. The "full fat" version of the expression would have been:
new_remote_control_car + { "nickname": $nickname }
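Both halves of the shorthand can be seen in a one-liner (my example, with an arbitrary nickname): --arg binds $nickname, and {$nickname} expands to an object whose key is the variable's name:

```shell
# {$nickname} is shorthand for {"nickname": $nickname}
jq -nc --arg nickname "Red" '{$nickname}'
# => {"nickname":"Red"}
```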
(There's also the difference between def new_remote_control_car(nickname)
and def new_remote_control_car($nickname)
but I'll leave that for another time.)
Another simple exercise, and this time I wanted to draw attention to the argument supplied as the "initial value" parameter of the reduce
function.
With reduce functions in general, I suppose I've gone through some sort of "journey of enlightenment" with respect to what's supplied as this "initial value":
Again, this is very obvious, but still worth calling out. Especially as I think that reduce is such a powerful and fundamental mechanism (if you're interested in reading more on reduce, see the Further reading section below).
Anyway, many of the community solutions used jq's reduce, which looks like this (in pseudocode):
reduce stream-of-values as $x (
initial-value;
generation of accumulated value using . and $x
)
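Filling in that pseudocode with a concrete (if unrelated) example of my own, a simple sum, where 0 is the initial value and . plays the role of the accumulated value:

```shell
# reduce: start the accumulator at 0, then add each value
# from the stream .[] to it in turn
jq -n '[1, 2, 3, 4] | reduce .[] as $x (0; . + $x)'
# => 10
```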
Nearly all of them had a literal object as the initial value, representing "Stage 2", like this from user kruschk:
def count_letter_grades:
reduce .[] as $grade ({A: 0, B: 0, C: 0, D: 0, F: 0};
.[$grade | letter_grade] += 1)
;
This is already great, because it elevates the lowly "initial value" parameter to something more important than "just a simple starting value". The fact that it takes up quite a bit of space in the actual call (i.e. {A: 0, B: 0, C: 0, D: 0, F: 0}
) draws one's attention to it.
Instead of writing this starting object out literally, I wanted to generate it, so opted for "Stage 3". This is in no way "better" than the community solutions*, but in a similar way I think it helps to elevate the "initial value" parameter to something more important than it might be, merely by passing an expression. Here's what I used:
def count_letter_grades:
reduce .[] as $grade (
"ABCDEF"|split("")|with_entries({key:.value,value:0});
.[($grade|letter_grade)] += 1
)
;
*in fact it's worse in at least one way, in that I missed the fact that there's no "E" grade in the entire exercise!
With the expression "ABCDEF"|split("")|with_entries({key:.value,value:0})
I wanted to construct that literal object, and also use it as an opportunity to practise using with_entries
which is one of a set of three lovely functions. If you're interested, I wrote a post on this: Reshaping data values using jq's with_entries.
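Seen on its own with a shorter alphabet (my cut-down version of the expression above), the generation step works because to_entries on an array yields index/value pairs:

```shell
# each letter becomes a key, each value is initialised to 0
jq -nc '"ABC" | split("") | with_entries({key: .value, value: 0})'
# => {"A":0,"B":0,"C":0}
```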
I'll admit that this solution is perhaps not as clear as the literal object construction; nevertheless, I like it because it makes me think more about the importance of that first parameter to reduce
.
If you are still unfamiliar with reduce as a concept, I'd heartily recommend taking some time to become familiar with it. Here are a few posts from me on the topic, and the "F3C" ones have links to corresponding "Fun Fun Function" videos on the topic, from the awesome mpjme:
Until next time, happy solving!
]]>In Living on a narrowboat - embracing constraints I briefly mentioned the stove I'm getting. This post is about choosing a stove in general, and how I came to my decision and the factors I considered.
For keeping the cabin warm, some modern boats might rely mainly on a radiator circuit. Some even have underfloor heating. Some have an electric heater and fan combination that pushes warm air into the cabin, a little bit like in a car. And of course some (I would suggest the majority) have a standalone stove of some sort.
And if you're going to have a stove, the choice of stove is important, and also based on what can be seen as constraints. Cabin size, smokeless fuel restrictions, and more. There's the question of fuel too - you can get stoves that run on diesel, wood, or wood & coal ("multifuel").
I spent some time on a narrowboat called Queenie that I rented from Star Narrowboat Holidays (you can see a picture of her moored up in the first post in this series) and Queenie had an electric heater and fan mechanism, plus a small multifuel stove. I didn't use the former as it was noisy and not particularly effective, so I relied solely on the stove. One of the trips was in February, when the temperatures in the UK are still fairly low (often around freezing) so it was a good test.
Queenie is a relatively small narrowboat, being only 50 feet in length, with an inside cabin length of probably 12 feet less than that. The stove on Queenie is the Hobbit model from Salamander. You can just see it in this photo from another post that I wrote about one of my stays on her (Allowing my intangible core to catch up with the rest of me...):
The stove itself is quite small, somewhat dwarfed by the flue pipe rising from it. You can check the stove's dimensions in the Technical Data section of the Hobbit's page. The output is nominally 4.1kW.
I noted both advantages and disadvantages to this stove on Queenie, based on its small size and heat output:
The multifuel nature of the stove meant that I could burn wood or coal (of the smokeless variety of course) but longer term, the inner dimensions would make it more onerous than I would like to find or cut up wood to fit the small space.
So I ended up discounting the Hobbit early on in my research.
A slightly larger stove is the Squirrel, from MorsĆø. When I lived in East Sussex, I had the 1410 model, and then the 1412 model more recently in Manchester. This newer model is DEFRA approved for use in smoke-controlled zones.
In fact, you can see it in quite a few of my Untappd beer check-ins, like in this one of the classic Tripel from Westmalle:
The output of the Squirrel is rated at 5kW, and in general it's larger than the Hobbit. Both these aspects would mitigate the issues mentioned earlier, and in fact this was the stove I was going to choose for the narrowboat, if I hadn't had a stroke of luck.
That luck came at the Crick Boat Show last year. First though, some background. As I mentioned in the post about embracing constraints, the stove would ideally serve not only as a general source of heating, but also provide a way to boil water for tea and coffee, a surface on which to put a casserole dish (or dutch oven) for slow cooking, and, as a bonus, have a separate oven for baking. These are often called "range" stoves.
In this context, before making the final decision to opt for the Squirrel (which had a reasonably sized top plate), I had a look around at multifuel range stoves, i.e. a stove with an oven built in. But my initial search was in vain. By that I mean there were plenty, but they were either too small, too large, too tall or would emit too much heat for the cabin and / or looked a bit too rustic.
I'd more or less shelved the idea of finding a range stove, but at Crick Boat Show last year I saw a range stove in one of the boats I went on to view (just to get ideas of layout, design, and so on). It looked almost ideal!
Speaking to the boat sales person, I found out it was from Chilli Penguin. It had a modern design, was about the right size (neither too small nor too large) and had a 5kW output rating:
Beyond the design and specifications, I particularly liked the oven above the fire box, and the stainless steel plate on top.
It turned out that Chilli Penguin offered a good selection of stoves around the 5kW output mark, all of which were going to be a good size for the narrowboat. Not too large, but not too small either.
The selection offered various combinations of aspects, and I ended up choosing the Fat Penguin (Tall Order), which has:
*Though I still may get a stove fan
Here are the specs:
It's worth mentioning diesel stoves before I finish this post. Until I started researching stoves for narrowboats, I'd never even come across one particular type, the diesel stove. Having a stove that uses diesel for fuel makes some sense, and is an attractive option for some people. It burns clean, and there's no ash or coke to clear out of the fire box. It's generally a lot cleaner all round.
And if your narrowboat is diesel powered anyway, you're going to be carrying diesel fuel already, for the engine and perhaps also for a diesel powered water heating system too (see the section Electricity and fuel for the engine and for the stove in the "embracing constraints post"). So a diesel stove would be a good logical choice.
But it's not all about logic. Certainly not for me. I like a real fire, I like the ability to forage for and gather fuel from fallen branches. And there's something mesmerising, not to mention cosy, in the flames of a real fire. So while stoves like the Refleks are a popular choice these days, I'm sticking with my multifuel experience.
This picture from The Fitout Pontoon's page on oil fired heating shows a typical Refleks diesel stove. You can see it sports a small round hot plate on top, but that's about it for double duty.
So while I would have been happy with the Morsø Squirrel 1412, I'm even happier with the prospect of the Fat Penguin (Tall Order) stove. It's on order, and delivery is due some time in June. It's this one, although I'm getting it in grey, not red:
I'm looking forward to seeing it installed and on board soon!
Next post in this series: Living on a narrowboat - layout details of the stern.
]]>def needs_license:
any(. == ("car", "truck"); .);
def needs_license:
if . == ("car", "truck") then true else empty end // false;
def needs_license:
(. == ("car", "truck") | select(.)) // false;
Each variant, when fed the same input, like this:
"car", "truck", "bike" | needs_license
produces the same output, i.e.:
true
true
false
(It's worth pointing out before continuing that none of these variants will fall foul of the gotcha I discovered with contains
/ inside
, so I can move on from testing whether true
will be returned for "car"
when the possible vehicles listed includes "cart"
and put that behind us.)
In this post, I'll take a brief look at generators, and then look at each of these solutions in turn.
There's a section that's common to each of these functions, and it's this:
. == ("car", "truck")
This struck me right between the eyes. Given the context that the value .
would be a vehicle string e.g. "car"
, I can't help but admit I was wondering what the heck was going on here. How can a string be sensibly compared with what looks like a list of strings?
So I decided to dig in, and am glad I did.
It becomes quickly clear that ("car", "truck")
isn't a list in the sense I was thinking about. First, the parentheses are just for grouping, not for any literal list construction. So let's omit them for a second. In fact, let's reduce the expression to something simpler, to see what I get:
"car" | . == "car"
# => true
So far so good. But what happens when I add the "truck"
value?
"car" | . == "car", "truck"
This gives us:
true
"truck"
The output is not a single JSON value, there are two, one from either side of the comma. And looking at the Generators and iterators section of the jq manual, I discover that:
Even the comma operator is a generator.
What is a generator? Something that produces zero, one or more values. I've used iterators and generators in JavaScript, and also in Python, so the concept is at least familiar to me.
What's happening here is that the comma is acting as a generator, producing (in this case) a value resulting from the expression on its left ("car" | . == "car")
, and a value resulting from the expression on its right ("truck"
). This is also why the output is as it is, and not, say, [true, "truck"]
; what's produced is not an array, but a stream of two discrete (and independently valid) JSON values.
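One way to convince yourself of this stream nature (my check, not from the post) is to collect the outputs with the array construction syntax; only then do the two values become a single array:

```shell
# the pipe binds more loosely than the comma, so this is
# "car" | (. == "car", "truck"), collected into one array
jq -nc '["car" | . == "car", "truck"]'
# => [true,"truck"]
```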
And the difference between this and the version with the parentheses is becoming clearer now. What happens when I add them?
"car" | . == ("car", "truck")
The grouping that the (...)
brings doesn't affect the generator nature of the comma, it just causes the . ==
part of the expression to be applied to the group of strings ("car"
and "truck"
), one by one. So this results in:
true
false
In other words, it's the equivalent of:
"car" | . == "car", . == "truck"
I wanted to dwell a little more on this comma-as-generator. Here are a couple of very simple examples:
1, 2, 3, 4, 5
This, unsurprisingly, produces:
1
2
3
4
5
But now I know what's actually happening, and the stream of scalar JSON values is more obvious.
(This subtlety reminds me of another subtlety in LISP, where list construction can be done via the list
function: (list 1 2 3 4 5)
which produces (1 2 3 4 5)
, or more explicitly using the cons
function: (cons 1 (cons 2 (cons 3 (cons 4 (cons 5 nil)))))
which also produces (1 2 3 4 5)
. We're not constructing lists here, but there's a vaguely similar feeling in how things are constructed. But anyway, I digress.)
How about using functions either side of commas, functions that produce streams of values?
[1,2]|map(.*10)[], range(3)
This produces a stream of five individual JSON scalar values:
10
20
0
1
2
Note that the important part of the expression to the left of the comma in this example is the array iterator, i.e. the []
part. If we were to omit that:
[1,2]|map(.*10), range(3)
we'd get this:
[
10,
20
]
0
1
2
This is a stream of four JSON values, the array being the first one.
In part 1 of this series, in looking at some alternatives for the Vehicle Purchase exercise, I noted that the any
function can be used with 0, 1 or 2 parameters.
In Matthias's first function example, we see the any/2
in use:
def needs_license:
any(. == ("car", "truck"); .);
The jq manual says the following about this form of any:
The any(generator; condition) form applies the given condition to all the outputs of the given generator.
So the first argument passed to any/2
is exactly the expression we've been looking at thus far, i.e. . == ("car", "truck")
. And it's supplied to the generator parameter.
The second argument being passed is .
which is supplied to the condition parameter.
So how is this function body to be interpreted? Trying out a simple call to any/2
helps me understand it a little more; the expression returns true
here because at least one of the values (2
) emitted from the generator expression 1,2,3
is divisible by 2:
any(1, 2, 3; . % 2 == 0)
# => true
Even more simply, I try this:
any(null, false, true; . == true)
# => true
In fact, this can be simplified to:
any(null, false, true; .)
# => true
The values (null
, false
and true
) in the generator expression are considered in the context of the condition expression .
and this of course then evaluates to true
due to the third value being truthy. I deliberately used the word "truthy" here as this also works:
any(null, false, 42; .)
# => true
In working slowly through this, I realise what looked odd to me about Matthias's first function solution, given the any(generator; condition)
signature - the generator expression looks more like a condition expression:
def needs_license:
any(. == ("car", "truck"); .);
But now having a better understanding of how . == ("car", "truck")
works as a generator, things are now clear. Piping the value "truck"
into this function, for example, gives us what we want:
"truck" | needs_license
# => true
And to make sure I see what's going on, I can insert a couple of debug filters in-line with the generator:
def needs_license:
any(debug | . == ("car", "truck") | debug; .);
"truck" | needs_license
Look at what that gives us (I've added some blank lines to better distinguish things):
["DEBUG:","truck"] From the 1st debug, value going
into the generator.
["DEBUG:",false] From the 2nd debug, these two values
["DEBUG:",true] are emitted from the generator.
true the final result produced by the call
Here's the next sample solution:
def needs_license:
if . == ("car", "truck") then true else empty end // false;
This looked a bit odd to me too. Knowing that . == ("car", "truck")
is essentially a generator of multiple values, what's going on here? Multiple values in the condition part of an if-then-else construct?
Well, the jq manual has the following to say in the context of if A then B else C end:
If the condition A produces multiple results, then B is evaluated once for each result that is not false or null, and C is evaluated once for each false or null.
What does this look like? To get a feel for it, I try this:
if "car" == ("car", "truck") then "yes" else "no" end
This produces:
"yes"
"no"
The "yes"
is from the "car" == "car"
returning true
(i.e. something that "is not false or null"), and the "no"
is from the "car" == "truck"
returning false
.
So far so good - and I know that multiple values from the generator expression can and do "flow through" the if-then-else construct. This also then helps me understand what is going on in the rest of the construct:
if . == ("car", "truck") then true else empty end
First, the true
and empty
values, in their respective positions here, are so that the if-then-else construct will emit true
(if there's a vehicle match) or nothing at all.
Using something like if . == ("car", "truck") then true else false end
is not going to work for us here, not least because it's redundant (it could be reduced to the actual condition, without the if-then-else at all) but mostly because it will produce multiple boolean values, whatever the input. Only one is wanted, and that's why empty
is used to throw away any false
values.
But that then leaves just true
or nothing being emitted, and this is what the // false
is for:
if . == ("car", "truck") then true else empty end // false;
Using this alternative operator (//), false can be emitted where there's no value coming from the if-then-else; in other words, whenever there are false value(s) being emitted from the generator in the condition position.
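The alternative operator on its own behaves like this (my examples): it produces its right-hand side only when the left-hand side produces no values that are neither false nor null:

```shell
jq -n 'empty // "fallback"'   # left side produces nothing      => "fallback"
jq -n 'false // 42'           # left side produces only false   => 42
jq -n '"keep" // 42'          # truthy left side wins           => "keep"
```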
To round off this section, I'll add a couple of debugs to the body of the function to see with my own eyes what's going on (I've also added some extra whitespace for readability):
def needs_license:
if (debug | . == ("car", "truck") | debug)
then true
else empty
end // false;
First, passing a vehicle that's not in the list, such as with "boat" | needs_license
, emits this:
["DEBUG:","boat"]
["DEBUG:",false]
["DEBUG:",false]
false
The value "boat"
goes in, two false
values are emitted from the generator, they both get turned into nothing (with else empty
) and then this nothingness is converted into false
with the // false
.
Now how about a vehicle that is in the list: "car" | needs_license
emits this:
["DEBUG:","car"]
["DEBUG:",true]
["DEBUG:",false]
true
The true
is emitted for "car" == "car"
, and then false
is emitted for "car" == "truck"
. The false
value is thrown away, but also because we still have a true
value coming out of the if-then-else construct, the // false
does not kick in, and we end up with that true
value.
While I still prefer the "any" based function solution to this one, I still think it's quite elegant, and it taught me to be aware of generators producing multiple values in the context of a condition in such a construct, and how to handle them.
The last of the function variants is this one:
def needs_license:
(. == ("car", "truck") | select(.)) // false;
Everything here except for the select(.)
has been covered already, so I can treat myself to a slightly extended test, while omitting that select(.)
part (and the // false
) for now:
"car", "truck", "boat" | . == ("car", "truck")
This produces:
true
false
false
true
false
false
The order here is significant. That's three pairs of two booleans, from the combination of pairing "car"
, "truck"
and "boat"
, one at a time, with the two values "car"
and "truck"
:
Input | Compare with "car" | Compare with "truck" |
---|---|---|
"car" | true | false |
"truck" | false | true |
"boat" | false | false |
The select function is described in the jq manual as select(boolean_expression)
thus:
The function
select(foo)
produces its input unchanged if foo
returns true
for that input, and produces no output otherwise.
This description reminds me of the if <condition> then true else empty end
; the only difference is that select
returns the input unchanged and this if-then-else construct explicitly returns true. It just so happens of course that the input in this select
case is going to be boolean values too, so it has the same effect.
And because it has the same effect, it also needs to supply the alternative value false
when there's not a match, which is done again with // false
attached to the entire output of the combination of the generator and the select
function, i.e. this combination: (.==("car", "truck") | select(.))
.
I think the beauty here is the use of .
as the boolean expression that select
expects, conveying the values from the generator.
I hadn't planned to write this content in this second part of the series, but thanks to Matthias's contribution, I thought it was worthwhile. I've certainly had a good opportunity to dwell on the minutiae of these solutions and to get a better feel for streams of values in jq programs.
In the next part I'll continue to look at community solutions for some other jq exercises on Exercism, and explain what I missed, observed, and learned.
As well as the direct benefit of practice, I've learned and been reminded of aspects of jq while looking through the community solutions. So I thought I'd write some of them up here, because writing will also help me remember.
I'll start with some simple observations:
map
, map_values
and the array/object iterator
Even in the basic learning exercise Shopping List there are subtle points worth talking about.
It's based on determining information from a shopping list that looks like this (reduced for brevity):
{
"name": "Ingredients for pancakes",
"ingredients": [
{
"item": "flour",
"amount": {
"quantity": 1,
"unit": "cup"
}
},
{
"item": "sugar",
"amount": {
"quantity": 0.25,
"unit": "cup"
}
},
{
"item": "baking powder",
"amount": {
"quantity": 1,
"unit": "teaspoon"
}
}
],
"optional ingredients": [
{
"item": "blueberries",
"amount": {
"quantity": 0.25,
"unit": "cup"
},
"substitute": "chopped apple"
}
]
}
The first observation is about the contrast between the concept of arrays, with corresponding array functions like map
, and the concept of streaming in jq.
The third task in this exercise was to identify the amount of sugar, which I determined like this:
(
.ingredients
| map(select(.item == "sugar"))
| first.amount.quantity
)
The value of the ingredients
property is an array, and using map
like this produces another array, albeit with a single element (the object that represents the sugar ingredient). So I then used first
to grab that element, and navigated to the quantity
property. All fine. Having used map
in various languages, and learned to think about arrays and how functions such as map
, filter
and reduce
work (see FOFP Fundamentals of functional programming) this felt natural to me.
That being said, jq is fundamentally stream oriented, which can be seen in glennj's solution:
(
.ingredients[]
| select(.item == "sugar")
| .amount.quantity
)
Note the use of the array / object value iterator on the ingredients
property ([]
), and the lack of map
(and first
).
Expressing .ingredients[]
(as opposed to .ingredients
) explodes into a stream of values (one for every array element) which are each passed downstream (to select
and beyond). The select
then only allows the journey to continue for the element(s) that satisfy the condition, which means that the data coming through the last pipe is not an array but an object*.
*theoretically there could be more than one object coming through, but in this case there is just one.
Streaming in jq is an important aspect and can be a powerful mechanism to use.
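The two styles can be compared side by side; here's a trimmed-down version of the shopping list (just two ingredients, with only the quantity in each amount - an abbreviation for illustration, not the full exercise input) piped through each filter:

```shell
LIST='{"ingredients":[{"item":"flour","amount":{"quantity":1}},{"item":"sugar","amount":{"quantity":0.25}}]}'

# array-oriented: map over the array, then pick the first element
echo "$LIST" | jq '.ingredients | map(select(.item == "sugar")) | first.amount.quantity'

# stream-oriented: explode the array and let select filter the stream
echo "$LIST" | jq '.ingredients[] | select(.item == "sugar") | .amount.quantity'
```

Both emit 0.25, but the second never builds an intermediate array at all.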
Assembly Line is another learning exercise, where I decided to avoid an if ... elif ... else ... end
structure and instead encode the computation for task 1 (calculation of the production rate per hour) using an array as a kind of lookup table:
def production_rate_per_hour:
. as $speed
| (221 * $speed)
*
([0, 100, 100, 100, 100, 90, 90, 90, 90, 80, 77][$speed] / 100)
;
I prefer the way this looks, over a multi-condition if
structure, but there's a further improvement possible that I picked up, again from glennj in his solution, which was the avoidance of the symbolic binding of the input to $speed
(the . as $speed
part).
I'd used a symbolic binding because I knew I would need to refer to it both in the basic speed calculation (multiplying it by 221) and using it to index into the lookup table ([$speed]
). But glennj reminded me that I could just as easily have used .
directly:
def production_rate_per_hour:
. * 221 * [1,1,1,1,0.9,0.9,0.9,0.9,0.8,0.77][. - 1]
;
Note that the subtraction of 1 from .
here is because this lookup table was constructed without a dummy value of 0 for the theoretical 0 speed.
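As a quick sanity check from the shell (speed 4 is in the 100% band, so the result is just 4 × 221):

```shell
jq -n 'def production_rate_per_hour:
         . * 221 * [1,1,1,1,0.9,0.9,0.9,0.9,0.8,0.77][. - 1];
       4 | production_rate_per_hour'   # 884
```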
A useful reminder which helps me strive for better avoidance of all that is unnecessary.
In reviewing my solutions for this post, I came upon what I'd written for the last task in the High Score Board exercise, which was to find the total score, as illustrated thus:
{
"Dave Thomas": 44,
"Freyja Ćirić": 539,
"José Valim": 265
}
| total_score
# => 848
I'd written the following:
def total_score:
[.[]] | add + 0;
As I mentioned earlier in this post, .[]
is the array / object value iterator. When I mentioned it back then, it was used to iterate over array values, i.e. the elements of the ingredients
array.
Now here it's being used to iterate over the values in an object. Not the keys, but the values, i.e. 44
, 539
and 265
. When I looked at it, I was reminded of the jq manual section on map and map_values which says:
map(x)
is equivalent to [.[] | x]
. In fact, this is how it's defined. Similarly, map_values(x)
is defined as .[] |= x
.
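Those two definitions can be played back directly (a tiny array and object here, just for illustration); note how the first two forms collect the transformed values into an array, while the update-assign form retains the shape of its input:

```shell
# map(x) is defined as [.[] | x] - these two are equivalent
jq -cn '[1, 2] | map(. + 10)'       # [11,12]
jq -cn '[1, 2] | [.[] | . + 10]'    # [11,12]

# map_values(x) is .[] |= x - on an object, the object shape is retained
jq -cn '{"a": 1, "b": 2} | .[] |= . + 10'   # {"a":11,"b":12}
```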
Also as I mentioned earlier, this iterator will create a stream of values, rather than an array. In other words, this:
{
"Dave Thomas": 44,
"Freyja Ćirić": 539,
"José Valim": 265
}
| .[]
produces:
44
539
265
Note the lack of any semblance of an array - these are all single JSON values.
So in order to be able to use add
, which takes an array as input, I therefore also had to wrap this in an array constructor i.e. inside square brackets [ ]
:
{
"Dave Thomas": 44,
"Freyja Ćirić": 539,
"José Valim": 265
}
| [.[]]
which gave me:
[
44,
539,
265
]
Anyway, forgetting this .[]
was acting as an object value iterator, I then thought "hmm, this is more or less the equivalent of map
", given what the manual stated ... so I replaced [.[]]
with map(.)
, like this:
{
"Dave Thomas": 44,
"Freyja Ćirić": 539,
"José Valim": 265
}
| map(.)
This also gave me an array:
[
44,
539,
265
]
But the interesting thing was that this is map
being applied to an object, not an array, and I'm guessing it does the right thing in a sort of DWIM way (which I first came across in Perl). Even more interestingly, this use of map
on an object, which produces an array of the values in that object, contrasts nicely with map
's sibling map_values
, which, perhaps confusingly, doesn't do that.
In fact, I used map_values
in addressing the previous task in this exercise, to apply Monday bonus points, which I did like this:
def apply_monday_bonus:
map_values(. + 100);
What map_values
does is return the object but offer you the values to manipulate as each property is iterated over. So this:
{
"Dave Thomas": 44,
"Freyja Ćirić": 539,
"José Valim": 265
}
| map_values(. + 100)
produces this:
{
"Dave Thomas": 144,
"Freyja Ćirić": 639,
"José Valim": 365
}
and not this:
[
144,
639,
365
]
To complete the picture on this observation, I thought I'd mention the + 0
part in the solution:
def total_score:
[.[]] | add + 0;
If you supply an empty array to add
, it will produce null
:
[] | add
# => null
According to the addition section of the jq manual:
null can be added to any value, and returns the other value unchanged.
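Which is exactly why the + 0 works as a guard here - the null from an empty array is nudged to 0, while a real total passes through unchanged:

```shell
jq -n '[] | add'              # null
jq -n '[] | add + 0'          # 0
jq -n '[1, 2, 3] | add + 0'   # 6
```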
The Vehicle Purchase exercise is another learning one and was quite straightforward. My solution for the first task ("Determine if you will need a drivers licence") looked like this:
def needs_license:
. == "car" or . == "truck";
While this is fine because there are only two possible values for which we want to return true, the way I expressed this bothered me slightly.
In JavaScript, for example, I would have used an array to contain the values, and then used includes like this:
needs_license = x => ["car", "truck"].includes(x)
// needs_license("car") => true
// needs_license("train") => false
So after submitting my solution, I looked at what others had done. Quite a few used the same approach as me, but there was a solution from IsaacG that looked more appealing:
def needs_license:
[.] | inside(["car", "truck"]);
This inside
filter looked to me like the JavaScript approach above. But my goodness, did it open up a rabbit hole of investigations!
Looking at it, one would think that this would do the job. I started looking at the definition of inside in the jq manual, and found that it was "essentially an inversed version of contains
". Before looking at the definition of contains
, I took a quick look at some of the examples, and saw this one, which made me scratch my head:
jq 'inside(["foobar", "foobaz", "blarp"])'
Input: ["baz", "bar"]
Output: true
This is not an element-wise check; it's a (sub)string based comparison, even when working with arrays.
I looked at contains and the examples and description had me more convinced that actually the use of inside
in the solution to this sort of task may not be ideal.
There's a sentence in the description of contains
which looks fairly innocuous, but in fact masks a major gotcha (emphasis mine):
[With
A | contains(B)
...] an array B is contained in an array A if all elements in B are contained in any element in A.
Before continuing, let's just understand what "inside is an inversed version of contains" means. Well, we can look at the source for inside
in builtin.jq
:
def inside(xs): . as $x | xs | contains($x);
We can see that this is effectively just switching around the two arguments - here, xs
is the absolute list of elements, and .
(which is bound to $x
) is what we want to look for.
OK, digression over. Clearly, given the relationship between inside
and contains
, the gotcha also applies to inside
.
To help me focus in on the significance of "if [...] elements [...] are contained in any element [...]" in the above description, I defined the two licensable vehicles as being "cart" (with a "t") and "truck" instead of "car" and "truck":
def licensable_vehicles: ["cart", "truck"];
I then recreated the function above to look like this*:
def needs_license_inside: [.] | inside(licensable_vehicles);
I then tested it with three vehicles, which gave me an unexpected result:
["bus", "cart", "car"] | map(needs_license_inside)
# => [false, true, true]
The inside
function returns true
for "car" ... because the string is contained in one of the elements ("cart"). We can even unpick the inverse, to get closer to the source of the problem:
licensable_vehicles | contains(["car"])
# => true
Yikes!
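Both directions of the surprise can be reproduced standalone (assuming jq is installed), with the vehicle lists written out literally:

```shell
# "car" is a substring of "cart", so the element-wise intuition breaks down
jq -n '["cart", "truck"] | contains(["car"])'   # true
jq -n '["car"] | inside(["cart", "truck"])'     # true
```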
Rather than bemoan the slightly vague documentation combined with my misaligned expectations, I thought I'd look into how one might go about testing membership, if inside
(or contains
) is not the way.
The any filter has different forms (I guess known as any/0
, any/1
and any/2
).
We can use the any/1
form with a condition, like this:
def needs_license_any: . as $v | licensable_vehicles | any(.==$v);
This will give us what we're looking for:
["bus", "cart", "car"] | map(needs_license_any)
# => [false, true, false]
By the way, I had first created this version of the function as follows, and passed the vehicles under tests via a parameter:
def needs_license_any($v): licensable_vehicles | any(.==$v);
["bus", "cart", "car"] | map(needs_license_any(.))
# => [false, true, false]
But inspired by the builtin definition of inside I felt OK in using a symbolic binding (. as $v
) after all, despite what I mentioned earlier in the section on the Assembly Line exercise.
In the jq manual, index is described in a vague way, and the examples are quite minimal, which might give the impression it relates to strings and substrings. But I'm learning that the limited examples can be deceiving, and the functions and filters have subtle depths and for the most part just work the way you might assume, in different circumstances.
Here, index
will work for us in that it can return either an array index (for a given element, if it exists) or null (if it doesn't). A simple start with index
might look like this:
def needs_license_index: . as $v | licensable_vehicles | index($v);
However, this doesn't quite give us what we want:
["bus", "cart", "car"] | map(needs_license_index)
# => [null, 0, null]
But anding values such as these with true
does the trick, of course:
def needs_license_index:
. as $v | licensable_vehicles | index($v) and true;
["bus", "cart", "car"] | map(needs_license_index)
# => [false, true, false]
Note that in jq:
false and null are considered "false values", and anything else is a "true value"
which is why 0 and true
evaluates to true
.
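The building blocks of this index based approach can each be verified on their own:

```shell
jq -n '["cart", "truck"] | index("truck")'   # 1
jq -n '["cart", "truck"] | index("car")'     # null (exact match only, unlike contains)
jq -n '0 and true'                           # true  (0 is a "true value" in jq)
jq -n 'null and true'                        # false
```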
I'm sure there are more options, but I'll leave it there for now. What is your go-to approach for checking for elements in arrays? Let me know in the comments.
Recently my good old friend Craig Cmehil posted a discussion over on the SAP Community: Did you know? It's been 20 years! What's your favourite community memory?. There are some great photos and memories being shared there, and it's definitely worth heading over there after reading this post and checking out the thread.
Twenty years - has it really been that long already? Well, it's actually been longer, but more on that shortly.
The SAP Community (capital C) is the latest name and incarnation of the web-based platform that was born in 2003, twenty years ago this year. Back then, it was launched as the "SAP Developer Network", which aligned sensibly with other similar initiatives that existed around that time, such as MSDN, the Microsoft Developer Network.
Later that decade, in 2007, the name was changed to "SAP Community Network", partly to acknowledge the presence of other practitioners and welcome them into the mix.
Most recently, in 2016, the name was changed again to simplify things, and became "SAP Community".
The launch of SAP Developer Network, which we now take for granted in the form of the SAP Community, was in the first half of 2003. It was the result of a lot of work from a small team working behind the scenes. I was a member of that team. How did it come to be so? Well, there were a number of factors, which I'll describe here.
Just over a year before the birth of the web-based platform that we now know and love as SAP Community, my first book "Programming Jabber" was published by O'Reilly (see the books section in my About Me page). O'Reilly was (and still is) a very well respected technical book publisher and also had some great experience with building community platforms on the Web back then.
My work writing Programming Jabber, and speaking at O'Reilly's annual Open Source Convention (OSCON) event, meant that I had a great relationship with the wonderful folks that worked there.
I started working with SAP software in 1987, with the mainframe version SAP R/2 (version 4.1d to be precise). So by the time SAP decided it was time to build and run a community of its own, I already had 15 years' worth of relationships built up, and a reputation (mostly as a troublecauser, no doubt) based on my day job as an SAP basis person, developer, architect, consultant, and so on working at customers and partners and as an independent.
Moreover, I had been active in the community in the decade leading up to the birth of SAP Developer Network in 2003 too.
In the early days of my career, I was working as an employee of a small SAP consultancy and travelling around, spending most of my evenings in hotel rooms.
In early 1995 I created the first online SAP community. Back then, the Web wasn't what it is today; most Internet based communities were based around either Usenet (newsgroups) or mailing lists. Mailing list software was the norm for handling community discussions and interactivity, and I used Majordomo for the community I created, which was called "merlin", and was mainly for technical discussions and Q&A activities around both SAP R/2 and SAP R/3.
I spent pretty much every evening at the desk in my hotel room on my Sanyo NB 17 laptop (with a whopping 1MB of RAM and a 2400 baud modem), administering this community of like-minded folks who wanted to connect and exchange ideas and questions.
It was hard work, seemingly never ending, but very rewarding.
Later that year I got to know of another mailing list that had just formed, called sapr3-list. I reached out to the creator of that list, Bryan, and we proceeded to run our lists in parallel, exchanging stories of administrative issues and more.
Then a few months later, we were approached by some lovely folks from MIT, who were SAP customers, and who wanted to offer us help with our SAP community activities.
Sue Keohan was one of the folks that reached out, and between us, we formed a new single mailing list called SAP-R3-L that became the central SAP community that encompassed all the discussions, memberships and more of merlin and sapr3-list. This mailing list, by the way, was based on another piece of (now venerable) community mailing list software, LISTSERV.
The posting guidelines on our SAP-R3-L mailing list
With a rapidly growing number of community members, and multiple administrators able to deal with the discussions, the traffic, the issues and whatever else came up, the community blossomed further.
Hopefully that gives you a bit of context as to where things were when SAP decided to make its move. This was great news, and I got together with folks from O'Reilly and SAP to thrash out strategy, design, purpose and types of content that would make for a successful Web based community (because by this time mailing lists were less popular, and communities had started moving to the Web, so it made total sense).
We spent a few months working on this, and the result was launched in early 2003. It was super exciting, and I, along with a couple of others, including my old friend and colleague Piers Harding, had been busy creating technical articles to give the website some substance so we could launch with something that wasn't completely void of content.
There was a blogging system too, and we used that to express our thoughts and ideas from the start. I published my first blog post on the new website on 30 May 2003; this was the second blog post ever on SAP Community, the first being the inaugural one three days before that from Mark Finnern, who was at SAP and designated chief community herder and organiser on the new website.
So there you have it, my memories of the birth of the Web-based SAP Community. I'm proud to have played my part, and very happy to continue to do so in today's incarnation. The SAP Community flourishes because of the people, inside and outside of SAP. That's what a community is all about. And as long as it's about that, I think it will flourish for years to come.
In working through content on OData, it's hard not to notice the related topic of annotations. And I think it's fair to say that to some folks (myself included), annotations can seem somewhat mysterious, almost a dark art.
So I've written this deep dive post into exploring annotations, in CDS and in OData, and how they work together. It might help you better understand them, or at least feel more comfortable when you stare at them. I'd love to know what you think - please leave a comment at the bottom of this post.
There's a repository that was created to accompany the series of live streams: Back to basics: OData.
And to accompany this deep-dive post, we'll use a simple app. The specific app we'll use is the one that is built over the series of exercises in the (now archived) repo for the SAP CodeJam on CAP with Node.js.
The app is included in the Back to basics: OData repo, specifically in the bookshop directory in the annotations branch.
In particular, we will examine specific parts of the app, namely the annotations in the index.cds and service.cds files in the srv/ directory.
If you want to play along, get things set up and running first.
Clone the repository, switch to tracking the annotations branch, and move into the bookshop/
directory:
git clone https://github.com/SAP-samples/odata-basics-handsonsapdev/ \
&& cd odata-basics-handsonsapdev \
&& git checkout annotations \
&& cd bookshop
Then start the CAP server up:
cds run
You should see some output similar to this:
[cds] - loaded model from 4 file(s):
db/schema.cds
srv/index.cds
../../../home/user/.npm-global/lib/node_modules/@sap/cds-dk/node_modules/@sap/cds/common.cds
srv/service.cds
[cds] - connect to db > sqlite { url: 'db.sqlite', database: 'bookshop.db' }
Service name: CatalogService
Service name: Stats
[cds] - serving CatalogService { path: '/catalog', impl: 'srv/service.js' }
[cds] - serving Stats { path: '/stats', impl: 'srv/service.js' }
[cds] - server listening on { url: 'http://localhost:4004' }
[cds] - launched at 3/10/2023, 2:07:18 PM, version: 6.4.0, in: 536.41ms
[cds] - [ terminate with ^C ]
If so, you're all set!
This section is fairly long, and is a journey through annotations in the service layer of the CAP application in this directory. There are the following sections:
We start here with a brief introduction to annotations in CAP and CDS.
Then we look at a singular annotation in service.cds.
After that, we take a look at a more complex set of annotations in index.cds, with an exploration of OData annotation vocabularies in general, a specific look at one particular vocabulary (the UI annotation vocabulary), a brief overview of the syntax for annotations in CDS, a deep dive into annotation values (primitives, collections and records), how to express multiple annotations in CDS, rounding off this branch of exploration with an examination of annotation vocabulary references.
After that exploration of theory, we're ready for interpreting the annotation details, wherein we look at the DataFieldAbstract type, the UI.Identification term, the UI.LineItem term, the UI.SelectionFields term and the UI.HeaderInfo term, all of which are used in the annotations in index.cds
.
Finally, we turn to the OData metadata document, and take some time examining the OData annotations in EDMX, paying close attention to namespace references and then annotation targets, before moving on to look at the actual EDMX generated for the annotations used - the UI.Identification annotation, the UI.SelectionFields annotation, the UI.LineItem annotation and the UI.HeaderInfo annotation. To put everything into context, the CatalogService's metadata is shown in its entirety.
Here's a brief overview of the annotations used in the service layer, which is part of this app. Note that the word "annotation" is used in two different contexts here:
annotations in CDS (the ones written with the @
symbol)
annotations in OData (the ones that appear in the service's $metadata
document)
When used in an OData context (i.e. when describing an OData service in CDS) the CAP annotations will result in valid OData annotations. These annotations will belong to either standard OData vocabularies, or SAP specific vocabularies.
Note that "A service MUST NOT require the client to understand custom annotations in order to accurately interpret a response" (see the Vocabulary Extensibility section of OData Version 4.0. Part 1: Protocol Plus Errata 03). In other words, beyond annotations in the "Core" vocabulary, think of further annotations as suggestions.
CDS annotation: @readonly
Used at the entity level, this CDS annotation generates specific terms in the OData "Capabilities" vocabulary.
Specifically, this line:
@readonly entity OrderInfo as projection on my.Orders ...
causes these OData annotation terms to be generated and included in the service metadata document: DeleteRestrictions
, InsertRestrictions
and UpdateRestrictions
.
You can see this for yourself using the cds
command line tool to generate EDMX for the Stats
service defined within the srv/service.cds file, which looks like this:
using my.bookshop as my from '../db/schema';
// ...
service Stats {
@readonly entity OrderInfo as projection on my.Orders excluding {
createdAt,
createdBy,
modifiedAt,
modifiedBy,
book,
country
}
}
This is how you do it:
cds compile srv --service Stats --to edmx-v4
This produces the following output - note the <Annotations>
element:
<?xml version="1.0" encoding="utf-8"?>
<edmx:Edmx Version="4.0" xmlns:edmx="http://docs.oasis-open.org/odata/ns/edmx">
<edmx:Reference Uri="https://oasis-tcs.github.io/odata-vocabularies/vocabularies/Org.OData.Capabilities.V1.xml">
<edmx:Include Alias="Capabilities" Namespace="Org.OData.Capabilities.V1"/>
</edmx:Reference>
<edmx:DataServices>
<Schema Namespace="Stats" xmlns="http://docs.oasis-open.org/odata/ns/edm">
<EntityContainer Name="EntityContainer">
<EntitySet Name="OrderInfo" EntityType="Stats.OrderInfo"/>
</EntityContainer>
<EntityType Name="OrderInfo">
<Key>
<PropertyRef Name="ID"/>
</Key>
<Property Name="ID" Type="Edm.Guid" Nullable="false"/>
<Property Name="quantity" Type="Edm.Int32"/>
</EntityType>
<Annotations Target="Stats.EntityContainer/OrderInfo">
<Annotation Term="Capabilities.DeleteRestrictions">
<Record Type="Capabilities.DeleteRestrictionsType">
<PropertyValue Property="Deletable" Bool="false"/>
</Record>
</Annotation>
<Annotation Term="Capabilities.InsertRestrictions">
<Record Type="Capabilities.InsertRestrictionsType">
<PropertyValue Property="Insertable" Bool="false"/>
</Record>
</Annotation>
<Annotation Term="Capabilities.UpdateRestrictions">
<Record Type="Capabilities.UpdateRestrictionsType">
<PropertyValue Property="Updatable" Bool="false"/>
</Record>
</Annotation>
</Annotations>
</Schema>
</edmx:DataServices>
</edmx:Edmx>
These annotation terms basically say - to those consuming apps that can interpret them - that delete, insert or update operations may not be performed on the OrderInfo
entity.
In case you're wondering - these restrictions that are imposed via the @readonly
decoration in the CDS definition are actually implemented in CAP.
Assuming that the service is running (with cds run
) you can try this yourself, like this:
curl \
--silent \
--header 'Content-Type: application/json' \
--include \
--data '{"quantity": 10}' \
--url 'http://localhost:4004/stats/OrderInfo'
This produces the following:
HTTP/1.1 405 Method Not Allowed
X-Powered-By: Express
x-correlation-id: 3a80f986-2acd-4663-8116-d9b39d532f31
OData-Version: 4.0
content-type: application/json;odata.metadata=minimal
Date: Thu, 07 Jul 2022 10:57:04 GMT
Connection: keep-alive
Keep-Alive: timeout=5
Content-Length: 104
{"error":{"code":"405","message":"Entity \"Stats.OrderInfo\" is read-only","@Common.numericSeverity":4}}
Nice!
In this file, srv/index.cds, you can see the explicit annotate directive in action. This is in contrast to the previous example, where the @readonly
annotation was specified directly with the definition of what was being annotated.
(There's a parallel here to a feature of OData annotations, and how they're served. In a similar way to how annotations in CDS can be either alongside, or separate from, the data definitions they're describing, so also can OData annotations be served in the same EDMX document (the OData service's metadata document) or as a separate resource. Not anything earth shatteringly important, but worth mentioning here.)
annotate CatalogService.Books with @(
UI: {
Identification: [ {Value: title} ],
SelectionFields: [ title ],
LineItem: [
{Value: ID},
{Value: title},
{Value: author.name},
{Value: author_ID},
{Value: stock}
],
HeaderInfo: {
TypeName: '{i18n>Book}',
TypeNamePlural: '{i18n>Books}',
Title: {Value: title},
Description: {Value: author.name}
}
}
);
This example is considerably more involved than the @readonly
example previously. Let's take it bit by bit. You may also want to refer to the OData Annotations section of the CAP documentation.
First, let's consider the simple and single word "readonly", and then what appear to be words ("UI", "Identification", "LineItem", "Value", etc) within a wider syntactical structure in this larger example.
The previous @readonly
example was a CDS annotation that resulted in the generation of multiple OData annotations.
In this current example, what we're looking at are annotations that are closer to the direct use of the combination of the OData annotation concepts of "vocabulary" and "term". To understand this better, let's start by taking a step back, and staring at the OData annotation vocabularies for a few minutes.
Put simply, OData annotations are expressed in the form of terms, which are grouped together into vocabularies.
The standards document OData Vocabularies Version 4.0 Committee Specification / Public Review Draft 01 outlines six vocabularies as follows (the summary document OData specs provides some information on the different document stages such as "Committee Specification" and "Public Review"):
Vocabulary | Namespace | Description |
---|---|---|
Core | Org.OData.Core.V1 | Terms describing behavioral aspects along with annotation terms that can be used to define other vocabularies (yes, meta all the things!) |
Capabilities | Org.OData.Capabilities.V1 | Terms that provide a way for service authors to describe certain capabilities of an OData Service |
Measures | Org.OData.Measures.V1 | Terms describing monetary amounts and measured quantities |
Validation | Org.OData.Validation.V1 | Terms describing validation rules |
Aggregation | Org.OData.Aggregation.V1 | Terms describing which data in a given entity model can be aggregated, and how |
Authorization | Org.OData.Authorization.V1 | Terms describing a web authorization flow |
If you like rabbit holes, note that all the vocabularies are described in machine-readable format ... using terms in the Core vocabulary. Even the Core vocabulary itself. Don't forget to come back once you've explored!
In the Introduction section of the standards document, it says that "Other OData vocabularies may be created, shared, and maintained outside of this work product".
And so there are other OData annotation vocabularies, for different purposes. SAP has created some, and they are documented publicly in the SAP/odata-vocabularies repository on GitHub. Amongst the SAP vocabularies, there are ones called Analytics, Communication, DataIntegration and also one called Common which contains terms common for all SAP vocabularies.
Another one in that list from SAP is the UI vocabulary, containing terms relating to presenting data in user interfaces.
Staring at the table of Terms in this vocabulary (or any for that matter) will help us interpret the CDS in index.cds
we saw earlier, in other words, this:
annotate CatalogService.Books with @(...);
More specifically it will help us to interpret everything inside the @(...)
.
Looking at the contents of that table of terms, we see something like this (this excerpt shows just some of the many terms):
Term | Type | Description |
---|---|---|
HeaderInfo | HeaderInfoType? | Information for the header area of an entity representation. HeaderInfo is mandatory for main entity types of the model |
Identification | [DataFieldAbstract] | Collection of fields identifying the object |
Badge | BadgeType? | Information usually displayed in the form of a business card |
LineItem | [DataFieldAbstract] | Collection of data fields for representation in a table or list |
SelectionFields | [PropertyPath] | Properties that might be relevant for filtering a collection of entities of this type |
Note that there are terms, and there are types. A term has a value, which is of a certain type.
In the table we can recognize some of the content that we saw in index.cds as terms in this UI Vocabulary:
Identification
SelectionFields
LineItem
HeaderInfo
Note that in each case, the type is a single (camelcased) word. The word may be wrapped in square brackets, which denotes a collection of values of that type.
In the table excerpt above, most of the single words are also hyperlinked. For example, following HeaderInfoType leads to a table of properties that belong to that type, i.e. properties that the type consists of - in other words, the type is a structure (called a record, or object - see later).
This is how the HeaderInfoType
type is described, in terms of the properties within:
Property | Type | Description |
---|---|---|
TypeName | String | Name of the main entity type |
TypeNamePlural | String | Plural form of the name of the main entity type |
Title | DataFieldAbstract? | Title, e.g. for overview pages This can be a DataField and any of its children, or a DataFieldForAnnotation targeting ConnectedFields. |
Description | DataFieldAbstract? | Description, e.g. for overview pages This can be a DataField and any of its children, or a DataFieldForAnnotation targeting ConnectedFields. |
Image (Experimental) | Stream? | Image for an instance of the entity type. If the property has a valid value, it can be used for the visualization of the instance. If it is not available or not valid the value of the property ImageUrl can be used instead. |
ImageUrl | URL? | Image URL for an instance of the entity type. If the property has a valid value, it can be used for the visualization of the instance. If it is not available or not valid the value of the property TypeImageUrl can be used instead. |
TypeImageUrl | URL? | Image URL for the entity type |
Initials (Experimental) | String? | Latin letters to be used in case no Image , ImageUrl , or TypeImageUrl is present |
With this knowledge, we can now understand, for example, that the value for the HeaderInfo
term is a record of properties including TypeName
, TypeNamePlural
, Title
and so on.
There's one term in the main table of terms excerpt that has a type that is not hyperlinked. The term is SelectionFields
and the type is PropertyPath
. That's because that type is not a structure, but a single, scalar thing (also called a primitive). This implies that the value for the SelectionFields
term is a collection of paths to properties.
If you're wondering about the ? suffix on some of the types, ignore it for now - it's not needed for the understanding we're after here.
Another aspect that we need to consider when attempting to parse the annotations above, is CDS's annotation syntax. For any given term in a vocabulary, the annotation is written as follows:
@vocabulary.term
followed by the value for that annotation.
There are also qualified annotations of which you should be aware, but they're not in play in these examples.
Multiple annotations can be specified in one go by listing them one after another, or, more commonly, by listing them inside a @(...)
construct and separating them with commas. We can clearly see this in action in our index.cds example.
The final piece in the puzzle to understanding and interpreting annotation definitions and the EDMX content that is generated is the set of different value types for annotation terms. If you're familiar with the core value types in many programming languages, you'll be at home here. There are:
Value Type | Alternative Name | Example |
---|---|---|
Primitive | Scalar | a string, boolean value or number |
Record | Object | a collection of name value pairs like this: { name1: value1, name2: value2, ... } |
Collection | Array | a list of other types, either primitives or records, enclosed in [ ... ] |
Examples for each of these will help us to get a feel for their general shape.
For these examples, we'll use the most basic of service definitions in CDS, and annotate it as appropriate.
The base definition looks like this:
service Northwind {
entity Categories {
key ID: Integer;
description: String;
}
}
And the basic EDMX generated from this (see how we did it earlier with the cds compile
command) is as follows:
<?xml version="1.0" encoding="utf-8"?>
<edmx:Edmx Version="4.0" xmlns:edmx="http://docs.oasis-open.org/odata/ns/edmx">
<edmx:DataServices>
<Schema Namespace="Northwind" xmlns="http://docs.oasis-open.org/odata/ns/edm">
<EntityContainer Name="EntityContainer">
<EntitySet Name="Categories" EntityType="Northwind.Categories"/>
</EntityContainer>
<EntityType Name="Categories">
<Key>
<PropertyRef Name="ID"/>
</Key>
<Property Name="ID" Type="Edm.Int32" Nullable="false"/>
<Property Name="description" Type="Edm.String"/>
</EntityType>
</Schema>
</edmx:DataServices>
</edmx:Edmx>
Note that there are no annotations in this EDMX yet.
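As a quick sanity check on that observation, the absence of annotations can be confirmed programmatically. Here's a small illustrative sketch using Python's standard library; the namespace URI is taken from the xmlns declaration in the EDMX above:

```python
# Parse the basic EDMX and look for <Annotations> elements; tag names
# must be qualified with the edm namespace from the Schema element.
import xml.etree.ElementTree as ET

EDM = "http://docs.oasis-open.org/odata/ns/edm"

edmx = """<?xml version="1.0" encoding="utf-8"?>
<edmx:Edmx Version="4.0" xmlns:edmx="http://docs.oasis-open.org/odata/ns/edmx">
  <edmx:DataServices>
    <Schema Namespace="Northwind" xmlns="http://docs.oasis-open.org/odata/ns/edm">
      <EntityContainer Name="EntityContainer">
        <EntitySet Name="Categories" EntityType="Northwind.Categories"/>
      </EntityContainer>
      <EntityType Name="Categories">
        <Key><PropertyRef Name="ID"/></Key>
        <Property Name="ID" Type="Edm.Int32" Nullable="false"/>
        <Property Name="description" Type="Edm.String"/>
      </EntityType>
    </Schema>
  </edmx:DataServices>
</edmx:Edmx>"""

root = ET.fromstring(edmx)
# iter() walks the entire tree looking for matching elements
annotations = list(root.iter(f"{{{EDM}}}Annotations"))
print(len(annotations))  # 0 - no annotations in this EDMX yet
```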
Primitive example: vocabulary Core
, term Description
The Core vocabulary contains a number of primitive terms, one of which is Description. This has the type String
and itself is described as "A brief description of a model element".
For a brief look down the rabbit hole, take a look at the definitive description of the Core vocabulary terms, in Org.OData.Core.V1.xml, where the Core terms are defined, including this one:
<Term Name="Description" Type="Edm.String">
<Annotation Term="Core.Description" String="A brief description of a model element" />
<Annotation Term="Core.IsLanguageDependent" />
</Term>
Wait, what? Is the Core.Description
term itself annotated ... with the Core.Description
term? Yes. But let's pull ourselves back from the hole and continue with this example and our sanity (although if, like me, you do like to dive in, and are wondering how to annotate annotations in CDS, there's a section in the CAP documentation that covers that: Annotating annotations).
Let's annotate the Categories
entity type with this term (there are different ways to add annotations in CDS - refer to the CAP annotation syntax for more information):
service Northwind {
@Core.description: 'The general type of product'
entity Categories {
key ID: Integer;
description: String;
}
}
This results in:
<?xml version="1.0" encoding="utf-8"?>
<edmx:Edmx Version="4.0" xmlns:edmx="http://docs.oasis-open.org/odata/ns/edmx">
<edmx:Reference Uri="https://oasis-tcs.github.io/odata-vocabularies/vocabularies/Org.OData.Core.V1.xml">
<edmx:Include Alias="Core" Namespace="Org.OData.Core.V1"/>
</edmx:Reference>
<edmx:DataServices>
<Schema Namespace="Northwind" xmlns="http://docs.oasis-open.org/odata/ns/edm">
<EntityContainer Name="EntityContainer">
<EntitySet Name="Categories" EntityType="Northwind.Categories"/>
</EntityContainer>
<EntityType Name="Categories">
<Key>
<PropertyRef Name="ID"/>
</Key>
<Property Name="ID" Type="Edm.Int32" Nullable="false"/>
<Property Name="description" Type="Edm.String"/>
</EntityType>
<Annotations Target="Northwind.Categories">
<Annotation Term="Core.description" String="The general type of product"/>
</Annotations>
</Schema>
</edmx:DataServices>
</edmx:Edmx>
Picking out the annotations here, we see this:
<Annotations Target="Northwind.Categories">
<Annotation Term="Core.description" String="The general type of product"/>
</Annotations>
Set within an <Annotations>
element-based container that is used to identify the target of the annotations contained within, the single <Annotation>
element uses attributes to convey the term and the primitive value. Nice and simple.
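As an illustrative aside, that primitive-valued annotation is simple enough to pull apart with a few lines of Python (standard library only; the fragment is the excerpt above, parsed on its own without namespace declarations):

```python
# Read the target, term and primitive value from the annotation excerpt;
# for a primitive value, everything is conveyed in attributes.
import xml.etree.ElementTree as ET

fragment = """
<Annotations Target="Northwind.Categories">
  <Annotation Term="Core.description" String="The general type of product"/>
</Annotations>
"""

annotations = ET.fromstring(fragment)
annotation = annotations.find("Annotation")
print(annotations.get("Target"))  # Northwind.Categories
print(annotation.get("Term"))     # Core.description
print(annotation.get("String"))   # The general type of product
```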
Record example: vocabulary Capabilities
, term DeleteRestrictions
This is one we've seen before. The standard Capabilities vocabulary contains the DeleteRestrictions term, the value for which is a record, of type DeleteRestrictionsType.
The definitive definition of this can be found in Org.OData.Capabilities.V1.xml, as a combination of two things:
- the term itself (DeleteRestrictions)
- the type that describes it (DeleteRestrictionsType)
The term is defined thus:
<Term Name="DeleteRestrictions" Type="Capabilities.DeleteRestrictionsType" Nullable="false" AppliesTo="EntitySet Singleton Collection">
<Annotation Term="Core.AppliesViaContainer" />
<Annotation Term="Core.Description" String="Restrictions on delete operations" />
</Term>
The term itself is annotated with a couple of terms from the Core vocabulary too. But what's important here is the type of the term. The Core
vocabulary's Description
term had its type declared as Edm.String
:
<Term Name="Description" Type="Edm.String">
But for this Capabilities
vocabulary's DeleteRestrictions
term, the type is declared as Capabilities.DeleteRestrictionsType
. Moreover, this type definition comes next, in the form of a normal OData EDMX ComplexType
definition, something we'd see in other OData services, outside the context of just annotations, to describe things such as cities or locations, like in the OData metadata document for the V4 sample OData service "TripPin":
<edmx:Edmx Version="4.0" xmlns:edmx="http://docs.oasis-open.org/odata/ns/edmx">
<edmx:DataServices>
<Schema Namespace="Microsoft.OData.SampleService.Models.TripPin" xmlns="http://docs.oasis-open.org/odata/ns/edm">
<ComplexType Name="City">
<Property Name="CountryRegion" Type="Edm.String" Nullable="false"/>
<Property Name="Name" Type="Edm.String" Nullable="false"/>
<Property Name="Region" Type="Edm.String" Nullable="false"/>
</ComplexType>
<ComplexType Name="Location" OpenType="true">
<Property Name="Address" Type="Edm.String" Nullable="false"/>
<Property Name="City" Type="Microsoft.OData.SampleService.Models.TripPin.City" Nullable="false"/>
</ComplexType>
...
So record style annotation types are defined with the <ComplexType>
element, and this DeleteRestrictionsType
looks like this (to keep it brief, only a few properties are shown here):
<ComplexType Name="DeleteRestrictionsType">
<Property Name="Deletable" Type="Edm.Boolean" Nullable="false" DefaultValue="true">
<Annotation Term="Core.Description" String="Entities can be deleted" />
</Property>
<Property Name="NonDeletableNavigationProperties" Type="Collection(Edm.NavigationPropertyPath)" Nullable="false">
<Annotation Term="Core.Description" String="These navigation properties do not allow DeleteLink requests" />
</Property>
<Property Name="MaxLevels" Type="Edm.Int32" Nullable="false" DefaultValue="-1">
<Annotation Term="Core.Description" String="The maximum number of navigation properties that can be traversed when addressing the collection to delete from or the entity to delete. A value of -1 indicates there is no restriction." />
</Property>
</ComplexType>
Where have we seen this term in use before? In the EDMX generated from the @readonly
annotation in service.cds. Here's the relevant excerpt from the XML we saw earlier:
<Annotations Target="Stats.EntityContainer/OrderInfo">
<Annotation Term="Capabilities.DeleteRestrictions">
<Record Type="Capabilities.DeleteRestrictionsType">
<PropertyValue Property="Deletable" Bool="false"/>
</Record>
</Annotation>
...
</Annotations>
Having meditated a little on how these terms and types are defined, we can more comfortably approach the EDMX annotation content and pick out what's what. In this excerpt, we can now understand:
- that the annotations relate to the OrderInfo entity set, due to the value of the Target attribute in the container <Annotations> element
- that the <Annotation> element contains a child <Record> element
- that the <Record> element is described by the type Capabilities.DeleteRestrictionsType
- that within the record there's a <PropertyValue> element; attributes in this element convey the property (Deletable) and the corresponding value (false)
Indeed, the content of the <PropertyValue>
element here makes sense to us now, because we've seen the appropriate definition in the <ComplexType>
where the DeleteRestrictionsType
is defined:
<Property Name="Deletable" Type="Edm.Boolean" Nullable="false" DefaultValue="true">
<Annotation Term="Core.Description" String="Entities can be deleted" />
</Property>
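To see the shape of this record-valued annotation from a different angle, here's a brief Python sketch (standard library only) that parses the annotation excerpt from earlier; note how the value arrives as a child <Record> element rather than as attributes on <Annotation>:

```python
# For a record-typed term, the value is a <Record> with <PropertyValue>
# children; compare this with the primitive case, where attributes on
# <Annotation> were enough.
import xml.etree.ElementTree as ET

fragment = """
<Annotation Term="Capabilities.DeleteRestrictions">
  <Record Type="Capabilities.DeleteRestrictionsType">
    <PropertyValue Property="Deletable" Bool="false"/>
  </Record>
</Annotation>
"""

annotation = ET.fromstring(fragment)
record = annotation.find("Record")
pv = record.find("PropertyValue")
print(record.get("Type"))                  # Capabilities.DeleteRestrictionsType
print(pv.get("Property"), pv.get("Bool"))  # Deletable false
```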
Remember that the CDS annotation used, @readonly
, is basically expanded into the appropriate terms. There's a section in the CAP documentation on Adding Fiori apps to projects that shows us what the actual equivalent of this shorthand @readonly
annotation is:
entity Categories @(Capabilities:{
InsertRestrictions.Insertable: false,
UpdateRestrictions.Updatable: false,
DeleteRestrictions.Deletable: false
}) {
...
}
(The other annotations here are also generated in the EDMX, but we've just focused on the Capabilities.DeleteRestrictions
term for now.)
We're getting closer to being fully comfortable with the CDS annotation constructs in index.cds. And in fact here we can see something that links where we are on the journey with what we saw back there. And that is the way that the actual Capabilities
terms, along with the values for the properties of the corresponding records, are expressed.
Consider that, in the context of a term that is described by a record type, we have three levels:
- the vocabulary (here, Capabilities)
- the term (here, DeleteRestrictions)
- a property of the term's record type (here, Deletable)
In the Capabilities
vocabulary, the DeleteRestrictions
term is described by the DeleteRestrictionsType
type, which contains a number of properties, one of which is Deletable
. This property is written in CDS annotation terms in a dotted notation, followed by a colon, and then the value
Capabilities.DeleteRestrictions.Deletable: false
This expression is not exactly what we see in the longhand equivalent of @readonly
above, but we can see that it works, by using it to annotate our test Categories
entity precisely:
service Northwind {
@Capabilities.DeleteRestrictions.Deletable: false
entity Categories {
key ID: Integer;
description: String;
}
}
This will cause the following to be generated:
<?xml version="1.0" encoding="utf-8"?>
<edmx:Edmx Version="4.0" xmlns:edmx="http://docs.oasis-open.org/odata/ns/edmx">
<edmx:Reference Uri="https://oasis-tcs.github.io/odata-vocabularies/vocabularies/Org.OData.Capabilities.V1.xml">
<edmx:Include Alias="Capabilities" Namespace="Org.OData.Capabilities.V1"/>
</edmx:Reference>
<edmx:DataServices>
<Schema Namespace="Northwind" xmlns="http://docs.oasis-open.org/odata/ns/edm">
<EntityContainer Name="EntityContainer">
<EntitySet Name="Categories" EntityType="Northwind.Categories"/>
</EntityContainer>
<EntityType Name="Categories">
<Key>
<PropertyRef Name="ID"/>
</Key>
<Property Name="ID" Type="Edm.Int32" Nullable="false"/>
<Property Name="description" Type="Edm.String"/>
</EntityType>
<Annotations Target="Northwind.EntityContainer/Categories">
<Annotation Term="Capabilities.DeleteRestrictions">
<Record Type="Capabilities.DeleteRestrictionsType">
<PropertyValue Property="Deletable" Bool="false"/>
</Record>
</Annotation>
</Annotations>
</Schema>
</edmx:DataServices>
</edmx:Edmx>
If you're wondering about the lack of @(...)
in this example, rest assured, we'll get to it.
Collection example: vocabulary UI
, term SelectionFields
The last value type, collection, is used to express an array of values. Those values themselves can be primitive, or they can be records, which in turn contain further values. This is the same concept that can be found in data structures when programming or using declarative modeling in notations such as JSON. For example, a collection, or an array can contain a list of scalars:
[ 1, 2, 3 ]
Or it can contain more complex values such as objects; this is how JSON representations of OData entity set resources are typically expressed, such as this list of books from our running app, at the location http://localhost:4004/catalog/Books, specifically conveyed in the value
property here (which is a JSON array [...]
):
{
"value": [
{
"ID": 201,
"title": "Wuthering Heights",
"stock": 12,
"author_ID": 101
},
{
"ID": 207,
"title": "Jane Eyre",
"stock": 11,
"author_ID": 107
},
{
"ID": 251,
"title": "The Raven",
"stock": 333,
"author_ID": 150
}
]
}
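The correspondence between these value types and familiar data structures can be made concrete with a tiny Python sketch, using a trimmed-down version of the payload above:

```python
# The "value" property is a collection (a JSON array / Python list),
# and each entry is a record (a JSON object / Python dict).
import json

payload = """{
  "value": [
    { "ID": 201, "title": "Wuthering Heights", "stock": 12, "author_ID": 101 }
  ]
}"""

data = json.loads(payload)
print(type(data["value"]).__name__)     # list - a collection
print(type(data["value"][0]).__name__)  # dict - each entry is a record
```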
Of course, in JSON and in some programming languages, these arrays can contain elements of different types, but in this context of annotation value types, the child elements will all be of the same type (all scalars, all objects, etc).
For an example of a collection value type, we'll turn to the SAP UI vocabulary, and specifically the SelectionFields
term, which has the following description: "Properties that might be relevant for filtering a collection of entities of this type". The term is described as having this type:
[PropertyPath]
The collection notation [...]
is reflected in the XML based definition of the vocabulary thus:
<Term Name="SelectionFields" Type="Collection(Edm.PropertyPath)" Nullable="false" AppliesTo="EntityType">
<Annotation Term="UI.ThingPerspective" />
<Annotation Term="Core.Description" String="Properties that might be relevant for filtering a collection of entities of this type" />
</Term>
While the vocabularies we've examined so far have been OASIS standard vocabularies with namespaces such as
Org.OData.Core.V1
andOrg.OData.Capabilities.V1
, this vocabulary from SAP has the namespacecom.sap.vocabularies.UI.v1
.
Again, note that this annotation term is itself annotated. But more importantly here note the term type is expressed as a Collection(...)
of the type Edm.PropertyPath
. This is the definitive evidence that the SelectionFields
term has a value which is a collection.
Why don't we take the example of the SelectionFields
term from in index.cds and apply it to our simple Categories
entity:
service Northwind {
@UI.SelectionFields: [ ID, description ]
entity Categories {
key ID: Integer;
description: String;
}
}
When compiled to EDMX, this is what we get:
<?xml version="1.0" encoding="utf-8"?>
<edmx:Edmx Version="4.0" xmlns:edmx="http://docs.oasis-open.org/odata/ns/edmx">
<edmx:Reference Uri="https://sap.github.io/odata-vocabularies/vocabularies/UI.xml">
<edmx:Include Alias="UI" Namespace="com.sap.vocabularies.UI.v1"/>
</edmx:Reference>
<edmx:DataServices>
<Schema Namespace="Northwind" xmlns="http://docs.oasis-open.org/odata/ns/edm">
<EntityContainer Name="EntityContainer">
<EntitySet Name="Categories" EntityType="Northwind.Categories"/>
</EntityContainer>
<EntityType Name="Categories">
<Key>
<PropertyRef Name="ID"/>
</Key>
<Property Name="ID" Type="Edm.Int32" Nullable="false"/>
<Property Name="description" Type="Edm.String"/>
</EntityType>
<Annotations Target="Northwind.Categories">
<Annotation Term="UI.SelectionFields">
<Collection>
<PropertyPath>ID</PropertyPath>
<PropertyPath>description</PropertyPath>
</Collection>
</Annotation>
</Annotations>
</Schema>
</edmx:DataServices>
</edmx:Edmx>
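As a final parsing aside, here's how that collection value could be pulled out of the <Annotation> element with a few lines of standard library Python:

```python
# For a collection-typed term, the value is a <Collection> element
# containing one child element per entry - here, <PropertyPath> elements.
import xml.etree.ElementTree as ET

fragment = """
<Annotation Term="UI.SelectionFields">
  <Collection>
    <PropertyPath>ID</PropertyPath>
    <PropertyPath>description</PropertyPath>
  </Collection>
</Annotation>
"""

annotation = ET.fromstring(fragment)
paths = [pp.text for pp in annotation.find("Collection")]
print(paths)  # ['ID', 'description']
```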
The collection type is clearly comprehensible to us; even the names of the <PropertyPath>
elements contained therein are not unfamiliar now (remember, the type of the SelectionFields
term is described like this: [PropertyPath]
). There's nothing within the <Annotations>
element that is a mystery to us.
The examples so far have been single and separate: the Capabilities
vocabulary's DeleteRestrictions
term here, the Core
vocabulary's Description
term there, and the UI
vocabulary's SelectionFields
term yet somewhere else.
That's fine, and these can all be included together for an entity, as follows:
service Northwind {
@Core.Description: 'The general type of product'
@Capabilities.DeleteRestrictions.Deletable: false
@UI.SelectionFields: [ ID, title ]
entity Categories {
key ID: Integer;
description: String;
}
}
Often there's a need to use multiple annotations in the same vocabulary. And in order to avoid repeating the vocabulary name, the @(...)
construct can be used, in conjunction with curly braces. It might help to illustrate this first by considering an alternative (albeit extreme) way of expressing the DeleteRestrictions
annotation:
@(Capabilities: { DeleteRestrictions: { Deletable: false } } )
With extra whitespace, this looks like this:
@(
Capabilities: {
DeleteRestrictions: {
Deletable: false
}
}
)
The
@(...)
construct can also be used to group unrelated annotations, if you wish.
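The expansion just shown follows a simple mechanical rule, which can be sketched in a few lines of Python; this is an illustration of the idea, not how the CDS compiler actually does it:

```python
# Expand a dotted annotation path into nested maps, e.g.
# "Capabilities.DeleteRestrictions.Deletable": false becomes
# { "Capabilities": { "DeleteRestrictions": { "Deletable": False } } }
def explode(dotted_path, value):
    result = value
    # Wrap the value in one map per path segment, innermost first
    for segment in reversed(dotted_path.split(".")):
        result = {segment: result}
    return result

print(explode("Capabilities.DeleteRestrictions.Deletable", False))
# {'Capabilities': {'DeleteRestrictions': {'Deletable': False}}}
```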
Each "node" in the dotted hierarchy is exploded into a map (or object) of property and value pairs. Using this syntactical approach, it's easy to see the possibilities open up for expressing multiple terms in the same vocabulary. And this is exactly what's happening in index.cds, as we'll see.
annotate CatalogService.Books with @(
UI: {
Identification: [ {Value: title} ],
SelectionFields: [ title ],
LineItem: [
{Value: ID},
{Value: title},
{Value: author.name},
{Value: author_ID},
{Value: stock}
],
HeaderInfo: {
TypeName: '{i18n>Book}',
TypeNamePlural: '{i18n>Books}',
Title: {Value: title},
Description: {Value: author.name}
}
}
);
Before we leave this long but hopefully enlightening digression, there's one more thing to stare at in the annotation goodness that we find in the OData metadata documents, i.e. in the generated EDMX. For each of the primitive, record and collection examples, we've focused on the <Annotations>
element in the XML. But there are elements earlier on that are also related.
This is the EDMX from the primitive value example earlier:
<?xml version="1.0" encoding="utf-8"?>
<edmx:Edmx Version="4.0" xmlns:edmx="http://docs.oasis-open.org/odata/ns/edmx">
<edmx:Reference Uri="https://oasis-tcs.github.io/odata-vocabularies/vocabularies/Org.OData.Core.V1.xml">
<edmx:Include Alias="Core" Namespace="Org.OData.Core.V1"/>
</edmx:Reference>
<edmx:DataServices>
<Schema Namespace="Northwind" xmlns="http://docs.oasis-open.org/odata/ns/edm">
<EntityContainer Name="EntityContainer">
<EntitySet Name="Categories" EntityType="Northwind.Categories"/>
</EntityContainer>
<EntityType Name="Categories">
<Key>
<PropertyRef Name="ID"/>
</Key>
<Property Name="ID" Type="Edm.Int32" Nullable="false"/>
<Property Name="description" Type="Edm.String"/>
</EntityType>
<Annotations Target="Northwind.Categories">
<Annotation Term="Core.description" String="The general type of product"/>
</Annotations>
</Schema>
</edmx:DataServices>
</edmx:Edmx>
The annotation itself is <Annotation Term="Core.description" String="The general type of product"/>
.
In the EDMX, before the <DataServices>
section (which contains the <Schema>
which in turn contains the definitions of the annotations, entity sets, entity types, complex types and so on), there is a <edmx:Reference>
to the Core
vocabulary namespace.
<edmx:Reference Uri="https://oasis-tcs.github.io/odata-vocabularies/vocabularies/Org.OData.Core.V1.xml">
<edmx:Include Alias="Core" Namespace="Org.OData.Core.V1"/>
</edmx:Reference>
This qualifies the Core
vocabulary prefixes on the terms used, and includes the relevant vocabulary namespace Org.OData.Core.V1
and also the canonical URL where the definition can be found, i.e. https://oasis-tcs.github.io/odata-vocabularies/vocabularies/Org.OData.Core.V1.xml.
Now that you're aware of these, you'll start to notice their existence, and to pick them out of the XML noise at the start of the metadata documents.
OK, it's time to revisit the annotations in index.cds, examine them one by one, and make sure we understand what's generated in the EDMX, and why. Here are the annotations again:
annotate CatalogService.Books with @(
UI: {
Identification: [ {Value: title} ],
SelectionFields: [ title ],
LineItem: [
{Value: ID},
{Value: title},
{Value: author.name},
{Value: author_ID},
{Value: stock}
],
HeaderInfo: {
TypeName: '{i18n>Book}',
TypeNamePlural: '{i18n>Books}',
Title: {Value: title},
Description: {Value: author.name}
}
}
);
First, note that this is all of the contents of the srv/index.cds file. There are no entity type definitions in here. This is an example of keeping the annotations separate; not only via the annotate
directive, but also in a different file.
The entity type being annotated is Books
, within the CatalogService
service, i.e. this reference here in srv/service.cds:
using my.bookshop as my from '../db/schema';
service CatalogService {
entity Books as projection on my.Books;
...
}
The @(...)
construct is being used to group annotations together. In fact, staring at the structure within, we can see that all of the annotations here are terms from the UI
vocabulary, along with their types (from the UI Vocabulary resource):
Vocabulary | Term | Type |
---|---|---|
UI |
Identification |
[DataFieldAbstract] |
UI |
SelectionFields |
[PropertyPath] |
UI |
LineItem |
[DataFieldAbstract] |
UI |
HeaderInfo |
HeaderInfoType |
Both the Identification
and LineItem
terms have the same type, which is a collection of DataFieldAbstract building blocks. This building block is an abstract type (given its name, that's not a surprise to us) which has concrete instances. One concrete instance of this abstract type is DataField which is a record with five properties:
Property | Type | Description |
---|---|---|
Label |
String |
A short, human-readable text suitable for labels and captions in UIs |
Criticality |
CriticalityType |
Criticality of the data field value |
CriticalityRepresentation |
CriticalityRepresentationType |
Decides if criticality is visualized in addition by means of an icon |
IconUrl |
URL |
Optional icon |
Value |
Untyped |
The data field's value |
The Value
property is the only one that belongs to this concrete DataField
type; the rest are from the DataFieldAbstract
type. You can see this by examining the canonical machine-readable XML definition of the type, which looks like this:
<ComplexType Name="DataField" BaseType="UI.DataFieldAbstract">
<Annotation Term="Core.Description" String="A piece of data" />
<Property Name="Value" Type="Edm.Untyped" Nullable="false">
<Annotation Term="Core.Description" String="The data field's value" />
<Annotation Term="Validation.DerivedTypeConstraint">
<Collection>
<String>Edm.PrimitiveType</String>
<String>Collection(Edm.Binary)</String>
<String>Collection(Edm.Boolean)</String>
<String>Collection(Edm.Byte)</String>
<String>Collection(Edm.Date)</String>
<String>Collection(Edm.DateTimeOffset)</String>
<String>Collection(Edm.Decimal)</String>
<String>Collection(Edm.Double)</String>
<String>Collection(Edm.Duration)</String>
<String>Collection(Edm.Guid)</String>
<String>Collection(Edm.Int16)</String>
<String>Collection(Edm.Int32)</String>
<String>Collection(Edm.Int64)</String>
<String>Collection(Edm.SByte)</String>
<String>Collection(Edm.Single)</String>
<String>Collection(Edm.String)</String>
<String>Collection(Edm.TimeOfDay)</String>
</Collection>
</Annotation>
<Annotation Term="Core.IsLanguageDependent" />
</Property>
</ComplexType>
There's only a single Property
defined (which is Value
), with the rest coming from DataFieldAbstract
which is referenced via the BaseType
attribute in the <ComplexType>
element.
If you're wondering why the type is
DataFieldAbstract
and notDataField
, see this question and answer.
The Records section on the CAP documentation on OData annotations highlights this
DataFieldAbstract
type, pointing out its prominence and the behaviour of the compiler for annotations defined with terms that have this type; the generated EDMX will default to the concreteDataField
type (i.e.<Record Type="UI.DataField">...</Record>
) unless another is specified explicitly via the special$Type
property.
Now we know about the DataFieldAbstract
type and its concrete derivation DataField
that's being used here, we can more comfortably interpret the appearance of the two terms in the CDS annotations:
annotate CatalogService.Books with @(
UI: {
Identification: [ {Value: title} ],
LineItem: [
{Value: ID},
{Value: title},
{Value: author.name},
{Value: author_ID},
{Value: stock}
]
}
);
Every part of each of these annotations is now within our grasp. First, consider the syntax. This part:
@(
UI: {
Identification: [ {Value: title} ]
}
);
can be compressed thus:
@UI.Identification: [ { Value: title } ]
It can't be compressed further; if we were to specify the following:
@UI.Identification.Value: title
then the compiler would emit this:
[WARNING] In annotation translation: found complex type, but expected type 'Collection(UI.DataFieldAbstract)', target: Northwind.Categories, annotation: UI.Identification
because the type is a Collection of complex types (records), not a single complex type.
Anyway, what's being expressed here is that the entity type is to be "identified" by the title
property (of the Books
entity type).
You can see the EDMX result of this annotation in the corresponding XML in the service's metadata document.
The LineItem
term is very similar, except that there is more than one record given as the value. Again, the type of the term is [DataFieldAbstract]
, and what's being used is a collection of concrete DataField
instances, with a value specified for their Value
property. These values that are specified (ID
, title
, and so on) are properties in the model.
Note in passing that one of these model properties (author.name
) is reached via the Books
entity type's relationship with the Authors
entity type, and another (author_ID) is a property generated from the use of the managed association that creates that relationship.
You can see the EDMX result of this annotation in the corresponding XML in the service's metadata document.
The Fiori preview app shows us an example of how this annotation is used, to determine the columns in the list of books:
This has been covered earlier, and is (in this instance) a collection of (a single) primitive value, the title
property path. The annotation appears like this:
annotate CatalogService.Books with @(
UI: {
SelectionFields: [ title ]
}
);
but the annotation itself could be also be compressed like this:
annotate CatalogService.Books with @(
UI.SelectionFields: [ title ]
);
You can see the EDMX result of this annotation in the corresponding XML in the service's metadata document.
The Fiori preview app shows us an example of how this annotation is used, to determine which field(s) are exposed to allow filtering of books in the list:
Here's what this term looks like in isolation:
annotate CatalogService.Books with @(
UI: {
HeaderInfo: {
TypeName: '{i18n>Book}',
TypeNamePlural: '{i18n>Books}',
Title: {Value: title},
Description: {Value: author.name}
}
}
);
While not so much compressed, this could have equally been expressed as follows:
annotate CatalogService.Books with @(
UI.HeaderInfo.TypeName: '{i18n>Book}',
UI.HeaderInfo.TypeNamePlural: '{i18n>Books}',
UI.HeaderInfo.Title.Value: title,
UI.HeaderInfo.Description.Value: author.name
);
Rewriting this HeaderInfo
annotation term like this draws our attention to the subtle but significant difference in the curly braces used here.
For the TypeName
and TypeNamePlural
properties of the HeaderInfoType
type (see the HeaderInfoType reference that describes the HeaderInfo
term), the values are defined with the String
type.
And the string values are both references to internationalized string data, using the standard UI5 and CDS syntax for this:
{modelname>property}
In other words, the curly braces here are part of the syntax for specifying a model property in CDS. Inside (single-quoted) strings.
But the values for the Title and Description properties of the HeaderInfoType type are not strings, but records: complex types, in other words, via our friend DataFieldAbstract. The description of these two properties in the HeaderInfoType reference states: "This can be a DataField and any of its children, or a DataFieldForAnnotation targeting ConnectedFields." And just like before, the concrete type used here is DataField, with a Value property.
In other words, the curly braces in these two properties denote the DataField type's record structure that contains the Value property.
You can see the EDMX result of this annotation in the corresponding XML in the service's metadata document.
The Fiori preview app shows us an example of how this annotation is used, in two places: the plural "Books" used in the list, and the singular "Book", plus the book title and author name, for the header section of the detail page for a selected book:
With all this knowledge under your belt, the last thing to do in this journey of discovery is to revisit the OData service's metadata document (for CatalogService, rather than Stats) and stare at the EDMX, in particular the annotation related XML. It should now be somewhat clearer, and hopefully you'll be able to read it more comfortably and with more confidence.
You should also more easily recognise the names of the XML elements in use, as they directly represent concepts we've looked at: Collection, Record, PropertyValue, and so on.
Assuming the service is still running, open up http://localhost:4004/catalog/$metadata, take a deep breath, and dive in. The actual document content is at the end. Here are some reading notes on it.
At the top we see references to the namespaces corresponding to the annotation vocabularies used:
<edmx:Reference Uri="https://sap.github.io/odata-vocabularies/vocabularies/Common.xml">
<edmx:Include Alias="Common" Namespace="com.sap.vocabularies.Common.v1"/>
</edmx:Reference>
<edmx:Reference Uri="https://oasis-tcs.github.io/odata-vocabularies/vocabularies/Org.OData.Core.V1.xml">
<edmx:Include Alias="Core" Namespace="Org.OData.Core.V1"/>
</edmx:Reference>
<edmx:Reference Uri="https://sap.github.io/odata-vocabularies/vocabularies/UI.xml">
<edmx:Include Alias="UI" Namespace="com.sap.vocabularies.UI.v1"/>
</edmx:Reference>
The annotations themselves appear within the <Schema> element, within multiple <Annotations> elements. There are multiple elements because it's at this <Annotations> element level that the target of the annotation(s) is specified, and there are multiple annotation targets.
Some targets are entity types, such as Books:
<Annotations Target="CatalogService.Books">
...
</Annotations>
and Countries:
<Annotations Target="CatalogService.Countries">
...
</Annotations>
Other targets are properties, such as the ID property in Books:
<Annotations Target="CatalogService.Books/ID">
...
</Annotations>
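If you'd rather explore the annotation XML programmatically than by eye, here's a minimal sketch using only Python's standard library. The EDMX fragment is inlined here for illustration (in practice you could fetch the full document from http://localhost:4004/catalog/$metadata); the structure mirrors the targets shown above:

```python
import xml.etree.ElementTree as ET

# A small, hand-picked fragment of the <Schema> content from the metadata
# document, inlined for illustration purposes.
edmx_fragment = """
<Schema Namespace="CatalogService" xmlns="http://docs.oasis-open.org/odata/ns/edm">
  <Annotations Target="CatalogService.Books">
    <Annotation Term="UI.SelectionFields"/>
    <Annotation Term="UI.LineItem"/>
  </Annotations>
  <Annotations Target="CatalogService.Books/ID">
    <Annotation Term="UI.HiddenFilter" Bool="true"/>
  </Annotations>
</Schema>
"""

# ElementTree requires the namespace URI in Clark notation ({uri}localname).
EDM = "{http://docs.oasis-open.org/odata/ns/edm}"

def annotation_summary(xml_text):
    """Map each annotation target to the list of terms applied to it."""
    schema = ET.fromstring(xml_text)
    summary = {}
    for annotations in schema.findall(f"{EDM}Annotations"):
        target = annotations.get("Target")
        terms = [a.get("Term") for a in annotations.findall(f"{EDM}Annotation")]
        summary[target] = terms
    return summary

print(annotation_summary(edmx_fragment))
```

Running this over the full metadata document would give you an at-a-glance index of which targets carry which terms, which is a handy companion when reading the raw XML.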
The annotation XML for the UI.Identification term, applied to the Books entity type target, is as follows:
<Annotation Term="UI.Identification">
<Collection>
<Record Type="UI.DataField">
<PropertyValue Property="Value" Path="title"/>
</Record>
</Collection>
</Annotation>
The annotation XML for the UI.SelectionFields term, applied to the Books entity type target, is as follows:
<Annotation Term="UI.SelectionFields">
<Collection>
<PropertyPath>title</PropertyPath>
</Collection>
</Annotation>
The annotation XML for the UI.LineItem term, applied to the Books entity type target, is as follows:
<Annotation Term="UI.LineItem">
<Collection>
<Record Type="UI.DataField">
<PropertyValue Property="Value" Path="ID"/>
</Record>
<Record Type="UI.DataField">
<PropertyValue Property="Value" Path="title"/>
</Record>
<Record Type="UI.DataField">
<PropertyValue Property="Value" Path="author/name"/>
</Record>
<Record Type="UI.DataField">
<PropertyValue Property="Value" Path="author_ID"/>
</Record>
<Record Type="UI.DataField">
<PropertyValue Property="Value" Path="stock"/>
</Record>
</Collection>
</Annotation>
Note that this XML is slightly larger, as there are multiple records in the collection.
The annotation XML for the UI.HeaderInfo term, applied to the Books entity type target, is as follows:
<Annotation Term="UI.HeaderInfo">
<Record Type="UI.HeaderInfoType">
<PropertyValue Property="TypeName" String="Book"/>
<PropertyValue Property="TypeNamePlural" String="Books"/>
<PropertyValue Property="Title">
<Record Type="UI.DataField">
<PropertyValue Property="Value" Path="title"/>
</Record>
</PropertyValue>
<PropertyValue Property="Description">
<Record Type="UI.DataField">
<PropertyValue Property="Value" Path="author/name"/>
</Record>
</PropertyValue>
</Record>
</Annotation>
Here we can more plainly see the intermix of primitive values (for TypeName and TypeNamePlural) and records, i.e. complex types (for Title and Description).
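To make that distinction concrete, here's a small Python sketch (standard library only) that walks the UI.HeaderInfo record shown above and reports, for each property, whether its value arrived as a primitive String attribute or as a nested DataField record:

```python
import xml.etree.ElementTree as ET

# The UI.HeaderInfo annotation from the metadata document, inlined for illustration.
header_info_xml = """
<Annotation Term="UI.HeaderInfo" xmlns="http://docs.oasis-open.org/odata/ns/edm">
  <Record Type="UI.HeaderInfoType">
    <PropertyValue Property="TypeName" String="Book"/>
    <PropertyValue Property="TypeNamePlural" String="Books"/>
    <PropertyValue Property="Title">
      <Record Type="UI.DataField">
        <PropertyValue Property="Value" Path="title"/>
      </Record>
    </PropertyValue>
    <PropertyValue Property="Description">
      <Record Type="UI.DataField">
        <PropertyValue Property="Value" Path="author/name"/>
      </Record>
    </PropertyValue>
  </Record>
</Annotation>
"""

EDM = "{http://docs.oasis-open.org/odata/ns/edm}"

values = {}
record = ET.fromstring(header_info_xml).find(f"{EDM}Record")
for pv in record.findall(f"{EDM}PropertyValue"):
    if pv.get("String") is not None:
        # Primitive value: carried directly in the String attribute.
        values[pv.get("Property")] = ("String", pv.get("String"))
    else:
        # Complex value: a nested UI.DataField record with a Value path.
        nested = pv.find(f"{EDM}Record/{EDM}PropertyValue")
        values[pv.get("Property")] = ("DataField", nested.get("Path"))

print(values)
```

The branch on the String attribute is exactly the primitive-vs-record split described in the prose: strings live in attributes, records live in child elements.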
Here's the entire document, in all its glory.
<?xml version="1.0" encoding="utf-8"?>
<edmx:Edmx Version="4.0" xmlns:edmx="http://docs.oasis-open.org/odata/ns/edmx">
<edmx:Reference Uri="https://sap.github.io/odata-vocabularies/vocabularies/Common.xml">
<edmx:Include Alias="Common" Namespace="com.sap.vocabularies.Common.v1"/>
</edmx:Reference>
<edmx:Reference Uri="https://oasis-tcs.github.io/odata-vocabularies/vocabularies/Org.OData.Core.V1.xml">
<edmx:Include Alias="Core" Namespace="Org.OData.Core.V1"/>
</edmx:Reference>
<edmx:Reference Uri="https://sap.github.io/odata-vocabularies/vocabularies/UI.xml">
<edmx:Include Alias="UI" Namespace="com.sap.vocabularies.UI.v1"/>
</edmx:Reference>
<edmx:DataServices>
<Schema Namespace="CatalogService" xmlns="http://docs.oasis-open.org/odata/ns/edm">
<EntityContainer Name="EntityContainer">
<EntitySet Name="Books" EntityType="CatalogService.Books">
<NavigationPropertyBinding Path="author" Target="Authors"/>
</EntitySet>
<EntitySet Name="Authors" EntityType="CatalogService.Authors">
<NavigationPropertyBinding Path="books" Target="Books"/>
</EntitySet>
<EntitySet Name="Orders" EntityType="CatalogService.Orders">
<NavigationPropertyBinding Path="book" Target="Books"/>
<NavigationPropertyBinding Path="country" Target="Countries"/>
</EntitySet>
<EntitySet Name="Countries" EntityType="CatalogService.Countries">
<NavigationPropertyBinding Path="texts" Target="Countries_texts"/>
<NavigationPropertyBinding Path="localized" Target="Countries_texts"/>
</EntitySet>
<EntitySet Name="Countries_texts" EntityType="CatalogService.Countries_texts"/>
</EntityContainer>
<EntityType Name="Books">
<Key>
<PropertyRef Name="ID"/>
</Key>
<Property Name="ID" Type="Edm.Int32" Nullable="false"/>
<Property Name="title" Type="Edm.String"/>
<Property Name="stock" Type="Edm.Int32"/>
<NavigationProperty Name="author" Type="CatalogService.Authors" Partner="books">
<ReferentialConstraint Property="author_ID" ReferencedProperty="ID"/>
</NavigationProperty>
<Property Name="author_ID" Type="Edm.Int32"/>
</EntityType>
<EntityType Name="Authors">
<Key>
<PropertyRef Name="ID"/>
</Key>
<Property Name="ID" Type="Edm.Int32" Nullable="false"/>
<Property Name="name" Type="Edm.String"/>
<NavigationProperty Name="books" Type="Collection(CatalogService.Books)" Partner="author"/>
</EntityType>
<EntityType Name="Orders">
<Key>
<PropertyRef Name="ID"/>
</Key>
<Property Name="ID" Type="Edm.Guid" Nullable="false"/>
<Property Name="createdAt" Type="Edm.DateTimeOffset" Precision="7"/>
<Property Name="createdBy" Type="Edm.String" MaxLength="255"/>
<Property Name="modifiedAt" Type="Edm.DateTimeOffset" Precision="7"/>
<Property Name="modifiedBy" Type="Edm.String" MaxLength="255"/>
<NavigationProperty Name="book" Type="CatalogService.Books">
<ReferentialConstraint Property="book_ID" ReferencedProperty="ID"/>
</NavigationProperty>
<Property Name="book_ID" Type="Edm.Int32"/>
<Property Name="quantity" Type="Edm.Int32"/>
<NavigationProperty Name="country" Type="CatalogService.Countries">
<ReferentialConstraint Property="country_code" ReferencedProperty="code"/>
</NavigationProperty>
<Property Name="country_code" Type="Edm.String" MaxLength="3"/>
</EntityType>
<EntityType Name="Countries">
<Key>
<PropertyRef Name="code"/>
</Key>
<Property Name="name" Type="Edm.String" MaxLength="255"/>
<Property Name="descr" Type="Edm.String" MaxLength="1000"/>
<Property Name="code" Type="Edm.String" MaxLength="3" Nullable="false"/>
<NavigationProperty Name="texts" Type="Collection(CatalogService.Countries_texts)">
<OnDelete Action="Cascade"/>
</NavigationProperty>
<NavigationProperty Name="localized" Type="CatalogService.Countries_texts">
<ReferentialConstraint Property="code" ReferencedProperty="code"/>
</NavigationProperty>
</EntityType>
<EntityType Name="Countries_texts">
<Key>
<PropertyRef Name="locale"/>
<PropertyRef Name="code"/>
</Key>
<Property Name="locale" Type="Edm.String" MaxLength="14" Nullable="false"/>
<Property Name="name" Type="Edm.String" MaxLength="255"/>
<Property Name="descr" Type="Edm.String" MaxLength="1000"/>
<Property Name="code" Type="Edm.String" MaxLength="3" Nullable="false"/>
</EntityType>
<Annotations Target="CatalogService.Books">
<Annotation Term="UI.Identification">
<Collection>
<Record Type="UI.DataField">
<PropertyValue Property="Value" Path="title"/>
</Record>
</Collection>
</Annotation>
<Annotation Term="UI.SelectionFields">
<Collection>
<PropertyPath>title</PropertyPath>
</Collection>
</Annotation>
<Annotation Term="UI.LineItem">
<Collection>
<Record Type="UI.DataField">
<PropertyValue Property="Value" Path="ID"/>
</Record>
<Record Type="UI.DataField">
<PropertyValue Property="Value" Path="title"/>
</Record>
<Record Type="UI.DataField">
<PropertyValue Property="Value" Path="author/name"/>
</Record>
<Record Type="UI.DataField">
<PropertyValue Property="Value" Path="author_ID"/>
</Record>
<Record Type="UI.DataField">
<PropertyValue Property="Value" Path="stock"/>
</Record>
</Collection>
</Annotation>
<Annotation Term="UI.HeaderInfo">
<Record Type="UI.HeaderInfoType">
<PropertyValue Property="TypeName" String="Book"/>
<PropertyValue Property="TypeNamePlural" String="Books"/>
<PropertyValue Property="Title">
<Record Type="UI.DataField">
<PropertyValue Property="Value" Path="title"/>
</Record>
</PropertyValue>
<PropertyValue Property="Description">
<Record Type="UI.DataField">
<PropertyValue Property="Value" Path="author/name"/>
</Record>
</PropertyValue>
</Record>
</Annotation>
</Annotations>
<Annotations Target="CatalogService.Books/ID">
<Annotation Term="UI.HiddenFilter" Bool="true"/>
<Annotation Term="Common.Label" String="ID"/>
</Annotations>
<Annotations Target="CatalogService.Books/title">
<Annotation Term="Common.Label" String="Title"/>
</Annotations>
<Annotations Target="CatalogService.Books/stock">
<Annotation Term="Common.Label" String="Stock"/>
</Annotations>
<Annotations Target="CatalogService.Books/author">
<Annotation Term="Common.Label" String="AuthorID"/>
</Annotations>
<Annotations Target="CatalogService.Books/author_ID">
<Annotation Term="Common.Label" String="AuthorID"/>
</Annotations>
<Annotations Target="CatalogService.Authors/ID">
<Annotation Term="UI.HiddenFilter" Bool="true"/>
<Annotation Term="Common.Label" String="ID"/>
</Annotations>
<Annotations Target="CatalogService.Authors/name">
<Annotation Term="Common.Label" String="AuthorName"/>
</Annotations>
<Annotations Target="CatalogService.Orders/createdAt">
<Annotation Term="UI.HiddenFilter" Bool="true"/>
<Annotation Term="Core.Immutable" Bool="true"/>
<Annotation Term="Core.Computed" Bool="true"/>
<Annotation Term="Common.Label" String="Created On"/>
</Annotations>
<Annotations Target="CatalogService.Orders/createdBy">
<Annotation Term="UI.HiddenFilter" Bool="true"/>
<Annotation Term="Core.Immutable" Bool="true"/>
<Annotation Term="Core.Computed" Bool="true"/>
<Annotation Term="Core.Description" String="User's unique ID"/>
<Annotation Term="Common.Label" String="Created By"/>
</Annotations>
<Annotations Target="CatalogService.Orders/modifiedAt">
<Annotation Term="UI.HiddenFilter" Bool="true"/>
<Annotation Term="Core.Computed" Bool="true"/>
<Annotation Term="Common.Label" String="Changed On"/>
</Annotations>
<Annotations Target="CatalogService.Orders/modifiedBy">
<Annotation Term="UI.HiddenFilter" Bool="true"/>
<Annotation Term="Core.Computed" Bool="true"/>
<Annotation Term="Core.Description" String="User's unique ID"/>
<Annotation Term="Common.Label" String="Changed By"/>
</Annotations>
<Annotations Target="CatalogService.Orders/country">
<Annotation Term="Common.Label" String="Country"/>
<Annotation Term="Core.Description" String="Country code as specified by ISO 3166-1"/>
</Annotations>
<Annotations Target="CatalogService.Orders/country_code">
<Annotation Term="Common.Label" String="Country"/>
<Annotation Term="Common.ValueList">
<Record Type="Common.ValueListType">
<PropertyValue Property="Label" String="Country"/>
<PropertyValue Property="CollectionPath" String="Countries"/>
<PropertyValue Property="Parameters">
<Collection>
<Record Type="Common.ValueListParameterInOut">
<PropertyValue Property="LocalDataProperty" PropertyPath="country_code"/>
<PropertyValue Property="ValueListProperty" String="code"/>
</Record>
<Record Type="Common.ValueListParameterDisplayOnly">
<PropertyValue Property="ValueListProperty" String="name"/>
</Record>
</Collection>
</PropertyValue>
</Record>
</Annotation>
<Annotation Term="Core.Description" String="Country code as specified by ISO 3166-1"/>
</Annotations>
<Annotations Target="CatalogService.Countries">
<Annotation Term="UI.Identification">
<Collection>
<Record Type="UI.DataField">
<PropertyValue Property="Value" Path="name"/>
</Record>
</Collection>
</Annotation>
</Annotations>
<Annotations Target="CatalogService.Countries/name">
<Annotation Term="Common.Label" String="Name"/>
</Annotations>
<Annotations Target="CatalogService.Countries/descr">
<Annotation Term="Common.Label" String="Description"/>
</Annotations>
<Annotations Target="CatalogService.Countries/code">
<Annotation Term="Common.Text" Path="name"/>
<Annotation Term="Common.Label" String="Country Code"/>
</Annotations>
<Annotations Target="CatalogService.Countries_texts/name">
<Annotation Term="Common.Label" String="Name"/>
</Annotations>
<Annotations Target="CatalogService.Countries_texts/descr">
<Annotation Term="Common.Label" String="Description"/>
</Annotations>
<Annotations Target="CatalogService.Countries_texts/code">
<Annotation Term="Common.Text" Path="name"/>
<Annotation Term="Common.Label" String="Country Code"/>
</Annotations>
</Schema>
</edmx:DataServices>
</edmx:Edmx>
There are some annotations in the EDMX that we've not mentioned here. But you should be able to work out what they are, where they came from, and understand what they are for. That is a task for you to complete on your own. A clue here is that they relate to built-in CAP features relating to Common Types & Aspects.
Good luck, and happy annotating!
After a flight to Frankfurt and then a long and circuitous route to Heilbronn on the train, via Wiesbaden (!) and Mannheim (I think there were some engineering works going on), I set off from my hotel near Heilbronn station on the morning of the event, and crossed the Neckar river.
I found the venue easily, mostly because there was some lovely signage showing us the way.
I must start out by congratulating Marco Buescher, the host at Engineering ITS GmbH, for such great organisation. Just look at the setup that awaited us!
The participants arrived (some from afar, including Prague!) and we quickly got going with the CodeJam content, installing and tinkering with the btp CLI, and getting to know it by setting up the autocomplete feature and then exploring various resources on the SAP Business Technology Platform (SAP BTP).
We then started digging into one of the btp CLI's killer features, the JSON output, and spent some time learning how to parse and manipulate that JSON properly.
After that deep dive, I think it's fair to say that any fear of understanding the complexity of the output structures was dispelled; even a couple of participants who weren't primarily developers told me that their confidence in requesting, handling and using complex structured content like this had grown significantly. In addition, the trepidation folks felt about what SAP BTP was, and whether they would grok it, dissolved into the ether.
That made me happy.
One of the cool things about the relationship between what we can do with the btp CLI, and with the platform's Core Services APIs, is that more often than not, the same mechanism is being used in the background. With that in mind, we transitioned smoothly from using the btp CLI on the command line, to gearing up to calling an API, and comparing the output.
The journey from a standing start to calling an API was done over the course of three exercises, and for that I make no apology. One of the great things about a CodeJam event is that as well as getting to know each other, the participants have the best chance of getting to know the subject at hand in a meaningful way; they have the time to move slowly over the surface area and dig in deep, building a solid understanding about the fundamental interconnectedness of all things (that's a nod to the fictional holistic detective Dirk Gently, by the way).
There are a lot of moving parts to understand, from service instances and bindings, to authentication servers, OAuth 2.0 grant types, token requests, construction of authentication headers, and lots more. By the end of the third of these three exercises I could sense the participants flexing their newly formed muscles in this area, and it was a delight to see.
We broke for a wonderful Italian lunch (the host company has its world headquarters in Rome, Italy) and it was then revealed that it was Marco's birthday that day!
After that we got going again on the remaining exercises, looking at automation and scripting. One of the things that everyone seemed to enjoy (as did the participants in Utrecht earlier in the month) was the flow; we all tackled an exercise, and then got together at the end of that exercise to discuss what we'd done, what we'd learned, and to talk about some of the (deliberately open-ended) questions that are at the end of each exercise.
This way, no-one gets left behind, and the discussion and break between each exercise allows for a more permanent embedding of the knowledge in the brain.
We finished off the day with a couple of exercises where the participants created new resources in their SAP BTP subaccount, using the btp CLI, and then cleaned up using the corresponding API. On each API call we examined the verbose HTTP output carefully, learning how to interpret it, how to spot issues, and how to deal with errors.
Overall, I think it's fair to say that we got a lot out of the day. The participants were great, coming with an open mind and a willingness to learn and be curious, which is all one can wish for.
Thanks again to Engineering ITS GmbH for hosting, to the participants for coming, to my lovely colleagues and helpers Dinah and Kevin, as well as a bonus visit from another colleague Marco H from a different team at SAP, and last but not least to Marco B for organising!
For more on CodeJams, have a look at the long list of upcoming CodeJam events and the topics currently on offer.
Moving into 2023, my Developer Advocate teammates and I are looking forward to running more SAP CodeJam events. This was not the first SAP CodeJam event this year; Tom Jung ran one in January on the btp CLI and APIs. But it was the first SAP CodeJam on this specific topic.
If you're reading this it's quite possible you already know how fundamentally important CAP is (if not, may I recommend CAP is important because it's not important). Moreover, the breadth of application for CAP is enormous, and it makes sense to have some more focused subject matter based CAP CodeJams as well as more general ones.
So in addition to an SAP CodeJam covering CAP and SAP HANA Cloud (see this instance for example), we now have this SAP CodeJam which is designed to give you a friendly but deep dive introduction to key aspects of service integration with CAP. What service integration is, how it works, what CAP offers you as a developer, plus best practices and more, using a simple local service combined with a remote SAP S/4HANA Cloud service on the SAP API Business Hub. You can read more in the About this CodeJam section of the material repository.
So, how did yesterday's SAP CodeJam on Service Integration with SAP Cloud Application Programming Model go?
I've flown enough work air miles in the past few decades to last more than a lifetime, so I take the opportunity to travel by train whenever I can these days. It's not just better for the planet, it's better for my mental wellbeing too, and it's so much easier to work and relax.
So I took a first train from Manchester Piccadilly to London Euston, arriving in time to spend a pleasant hour's walk around my old stomping ground (I studied at University College London and lived and worked in London for years after too).
Then it was time to head to another central London train station, St Pancras, with its well-preserved gothic architecture.
It's here that the Eurostar service has its London terminus, so I headed through passport control and was soon boarded and on my way.
After spending the night in Rotterdam, I set off to Utrecht the next morning, where Wim Snoep kindly picked me up from Utrecht Leidsche Rijn train station.
We had a late breakfast at the INNOV8iON office, and in picking up sandwiches, they had ordered one specifically for me with an elongated bitterbal as they knew that was a favourite of mine. The day was off to a great start! :-)
I'd run an SAP CodeJam at INNOV8iON before, so I knew that all the host duties would be well taken care of by the lovely folks there, and I was right.
We were soon underway.
There was a warm welcome for me and all the participants, with a nice introduction from Wim as folks arrived and started to get to know each other.
Over the next 6 or so hours, we worked through the exercises, with the participants using their own laptops, with a development environment that was (for this particular SAP CodeJam) either a Dev Space in SAP Business Application Studio or VS Code with a dev container.
After each exercise we stepped away from the keyboards and got together for a 5 minute recap to work through some open-ended questions relating to what we'd just done, and to discuss anything that came up in that exercise.
Picture courtesy of Wim Snoep
This way we stayed on track together throughout the entire event, and everyone also had a chance to allow what they'd just worked through to bed down more firmly in the brain, before moving on.
After getting about two thirds of the way through the exercises, we took a well-earned break for some delicious food and drink, provided by our hosts, and took a chance to get to know each other better and talk about what we'd learned so far.
After that, we got back to it, completing the core part of the SAP CodeJam content (exercises 1 through 11), leaving the last bonus exercise for the participants to complete at home. This bonus exercise covers some CAP-based OData annotations for SAP Fiori elements; not directly part of the core topic, but certainly related and useful to have as reference.
After the main event had finished, I had the opportunity to join Wim and another participant Julian to record an episode of the HANA Cafe NL podcast, with Twan van den Broek. We talked about the day and it was a lot of fun, not to mention fascinating to see Twan's rather professional podcast recording equipment!
All in all it was a great day, made successful by the hosts at INNOV8iON and especially by all the participants who embraced the SAP CodeJam spirit, getting to know each other while working through the learning material together.
During a couple of the post-exercise recap sessions, we even had bonus information shared with the group by some of the participants. Now that's what I call collaborative learning!
It's no secret that a narrowboat is smaller than the vast majority of land-based homes (or offices for that matter). In the first post in this series, I'm moving onto a narrowboat, I outlined some of the basic dimensions (57 feet long and 6 feet 10 inches wide) and shared a diagram of the narrowboat layout:
What's clear from the design is that given the outside space at the bow and the stern, the total internal cabin length is actually more like 42 feet, from the steps down into the galley from the double doors on the cruiser stern, all the way to the step up from the bedroom, through the double doors at the front, into the well deck at the bow (remember, each of the squares in the diagram represents 1 foot x 1 foot or 30 cm x 30 cm).
That's clearly a constraint that one cannot ignore. But it's not the most significant one. More importantly, constraints are not necessarily a bad thing anyway.
Let me pause here and dwell on something Igor Stravinsky said:
"The more constraints one imposes, the more one frees one's self. And the arbitrariness of the constraint serves only to obtain precision of execution."
This is a great way to look at constraints, especially in today's world of everything everywhere at any time. While I've lived in very comfortable properties, I've never really been one that has coveted the new thing, the better washing machine, another car, that kind of thing. Yes, I've probably bought too much tech equipment in the past, but I've been offloading a lot of that via eBay, Gumtree and the like. Whatever Stravinsky had in mind, it was certainly unlikely to be related to household appliances or computers, but the idea of material possession does come into it, at least from my perspective.
I mean, I'm not going to turn into a hermit or anything like that but over the past few years I've been conscious of how I live.
Reducing one's reliance on material possessions is one thing (and a useful one given the prospect of moving onto a narrowboat) but the feeling of freedom, or at least the ability to escape from the hamster wheel of consumerism is very attractive.
Living day to day with far fewer items holds an appeal for me that is hard to put into words. It's not that I'm eschewing all luxuries, it's that I am (and have to be) very particular about the few that I can allow myself. One of these is a space for my coffee making equipment.
If you open the narrowboat design diagram in a new window to see it in its full size, you will see a worktop in the galley, numbered 18, with drawers below. That's where this coffee making equipment is going to live.
And while we're there, note the gas hob has just two burners. On many liveaboard narrowboats, there will be a full size cooker with four gas burners. While I've used all four burners while cooking for a load of dinner guests round at the house, that doesn't happen very often and as I'm going to be cooking just for myself for most of the time, two is all I need. What's more, there's the multifuel stove (numbered 24) that also will do perfect double duty for cooking as well as heating.
I want to write a separate post about what stove I chose, how I came to the decision and the factors I considered. For now, here's a picture of it:
It's from Chilli Penguin Stoves not too far away in Pwllheli, Wales, and is the Fat Penguin (Tall Order) model. As you can see, it has a decent oven and also the top of the stove acts as a hotplate too, for slow cooking, and brewing coffee in a moka pot.
This is in addition to the gas oven and grill I'll have in the galley below the two burner hob.
The hob, oven and grill all run on gas. So where's that from? If you look closely at the stern in the narrowboat design diagram, you'll see a couple of areas numbered 04. These are the stern gas lockers, and you can get a better idea of what they look like from these pictures.
In this first one, you can see inside them (they go deeper than they look):
In this one, you can see that their effective height from the stern floor is such that they make nice seats:
The gas is LPG and usually propane, and is most commonly found in 13kg canister sizes. The lockers are designed to take a 13kg canister each. You can read more about gas on board in this article Gas (LPG) On Narrowboats on The Fitout Pontoon's website.
So in a liveaboard situation, you're effectively off grid. And this is a constraint that's hard to ignore. Once you've got your two gas canisters, and you're underway on the cut, that's it. No unlimited supply. To be fair, even when you're in a marina, on a long term mooring, you'll still have to change the canisters when they're empty (this is why having two instead of one is a great idea).
Incidentally, David Johns, otherwise known as the person behind the YouTube channel CruisingTheCut has a video on fuel boats, those traders who ply the waterways carrying coal, diesel and LPG. They have schedules and you can buy from them as they come round your area. The video is A day in the life of a fuel boat on the UK canals.
Talking of being off grid, you're not only going to be carrying your entire gas supply, but also your water. Again, without going into too much detail right now, narrowboats have water tanks that hold water for your everyday use. As well as for drinking, it's for washing, showering, and the toilet (yes there are composting toilets but that's a subject for another time and I'm not going for one of those anyway, as I don't fancy the process of managing multiple stages of composted waste in bags on my narrowboat, thank you very much).
These tanks are often at the bow, under the well deck, and while there are different types (and some used to be part of the hull itself), modern narrowboats will often be fitted with tanks made of stainless steel, holding between 400 and 500 litres. Once that water's gone, it's gone. Another constraint, and this time perhaps an even more important one than the gas constraint.
There are water points that can be found everywhere along the canal network, so yes you can obviously fill your tank up again, but again, it's not like the cold water tap in a house, which has a magically endless supply. You have to use your water carefully and plan where and when you can get more.
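To get a feel for what that planning looks like, here's a back-of-the-envelope sketch in Python; all of the consumption figures are illustrative guesses of my own, not measurements:

```python
# Rough water budget for a narrowboat tank (all figures are illustrative guesses).
TANK_LITRES = 450  # somewhere in the 400-500 litre range mentioned above

daily_usage_litres = {
    "drinking and cooking": 5,
    "washing up": 10,
    "shower": 25,
    "toilet": 15,
}

total_per_day = sum(daily_usage_litres.values())
days_between_fills = TANK_LITRES // total_per_day

print(f"Estimated usage: {total_per_day} litres/day")
print(f"Tank lasts roughly {days_between_fills} days between water points")
```

Even with generous guesses, the point stands: a full tank buys you days, not weeks, so the next water point always has to be part of the cruising plan.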
You can see the space for the water tank in this picture; it will go directly behind the bow thruster tube (which will run between the two exit holes on each side) and sit underneath the well deck.
During the steelwork, Mark from the Fitout Pontoon contacted the team and got them to cut the exit holes as far forward as possible, to leave more room for a larger tank. Normally the exit holes would be a bit further back; in fact, you can see the original planned position for the bow thruster tube, indicated by the arrows marked on the base plate:
It's clear that a narrowboat life means an off grid life, for the most part. I say for the most part, as some folks live on their narrowboats which are permanently moored, and are thus also able to be permanently connected to "shoreline power" (i.e. a 230 volt AC supply).
But if you're not permanently moored, you have further constraints, including where your electricity comes from, and how much diesel you can store, and what you need to use it for. I want to cover power, engine and heating in separate posts, but here's a quick overview for now.
Diesel is used for propulsion (i.e. there's a diesel engine) and for heating. There's a diesel tank in the stern, built in to the shell. You can see where it is in this photo (marked with the red oblong) which also shows the diesel tank drain (the arrow on the right) and three pipes marked E, F and R.
Regarding those three pipes and their legends:
The phrase "leisure batteries" refers to the battery bank that is used to power everything on and inside the narrowboat. It is used to contrast with "starter battery" which is the one that is used to start the engine; they are usually on separate circuits so that even if you drain your leisure batteries through use of equipment on board, you can always start your engine (and then in turn recharge the drained batteries).
I can use coal and / or wood for the multifuel stove. But therein lies yet another constraint. Where do I keep it? There's a finite amount of space to store things. Often narrowboaters will store bags of coal on the roof (if space allows, and if you're not that bothered about the paintwork) or in the bow area, both in the well deck and also in what is still traditionally called the gas locker at the front.
You can see this bow gas locker in the narrowboat design diagram, specifically its lid, and you can see the (square) hole that the lid covers in the picture of the bow, above. You can get a decent amount of coal and wood in there, but the space is finite and there are other bulky things that need space in there - the bow thruster itself, a hosepipe reel (for bringing water from the water point taps and into the tank), and so on.
Again, embracing that constraint, using fuel wisely and conservatively, and planning where & when you're going to be able to re-stock, is key.
This is such a huge subject I'm going to say very little at this stage, and leave the detail for another time. Suffice it to say that since I'll be not only living aboard but working aboard too (see Working from a narrowboat - Internet connectivity), I'll be moored up and at my desk for large chunks of the day. At my desk I'll have a laptop, an external monitor, and the other usual devices associated with working (and live streaming) from home.
I'll also be running a fridge / freezer, lights, and so on. After consulting with Mark I made the decision to go for a system that is predominantly 230 volt (AC). In other words, I can use all normal appliances on board. But for that, I will need an inverter (which itself consumes a small amount of power to run) to turn the DC from the leisure battery bank to AC. And the charge in my batteries will definitely not be limitless!
So I will have to think about electricity a lot. How I use it, how I maintain the batteries and the rest of the system, and how I put charge back into the batteries.
That's a lot of constraints. Ones that I cannot easily work around, or avoid.
Nor do I want to. Living more frugally, more consciously, from an environment perspective, can only be a good thing. And the limitations and restrictions that come as a natural part of living in a small off grid space such as a narrowboat keep me away from the dangers of that age old truism - the more you have, the more you want, and the further out of reach satisfaction becomes.
Age and life experience have made me realise that one of the most precious commodities is time. And the wonderful thing one comes to realise is that amidst all the constraints that I've described here, one thing that is no more constrained than before is time.
In fact, through that very increased and pronounced contrast, I'm hoping that I will come to value and enjoy time as an end in itself. On the canal, things move slowly. Very slowly. And I think that is reflected in how time will continue to be available in the same quantities as ever, despite everything else being less available, less copious, and far more immediately finite.
A possible net result of this can be a greater awareness and appreciation for the small things, for the simple things. And that's something I would cherish.
Next post in this series: Living on a narrowboat - the stove as the heart of the home.
]]>Since publishing the first post about my plans to live on and work from a narrowboat (see I'm moving onto a narrowboat), I've had some lovely comments and some great questions, thank you. One which came up a lot both on Twitter, from folks like Maffi, Joel and Sacha, and elsewhere, is: How do I get Internet connectivity? I'll try to answer that question in this post.
I work as a Developer Advocate for SAP, and am in the very fortunate position to work remotely. I've worked remotely (i.e. from home) for many, many years, for different companies; in fact, on reflection, my working life has been a balance of two extremes: the constant weekly travel of a contractor / consultant (there was a "peak" period of 7 years where I flew at least twice a week, and sometimes four times a week, every week, to different clients), and the calm and travel-free context of working from home.
I don't miss the travel at all. Not one bit. I've seen enough airports and economy airplane seats to last more than a lifetime.
Anyway, pretty much any remote work requires an Internet connection. So when I'm on the narrowboat, I'll need one too. While cable or FTTP is appealing, I don't think there's a cable long enough to make things work as I navigate the canal networks. So the solution needs to be a little more mobile than that.
I did a lot of research, and ended up going for a 4G/5G mobile data based solution. I've actually been using this solution for a while already, I'll explain shortly. Here's what that solution looks like.
First, I had to decide upon a provider for the 4G/5G mobile data connection. Reading the narrowboating forums and speaking to folks on the cut, the general consensus seemed to come down to a choice between a handful of providers (here in the UK): Three, EE and Vodafone. Each offer a broadband solution based on mobile data. All offer both 4G and 5G based options.
After considering all of them, I opted for Vodafone. Specifically, I went for their Data only SIM offering, in the unlimited data version. This offering includes the "fastest available" speed option, which basically means 5G as well as 4G.
There are plenty of articles out there that show comparisons between 4G and 5G (such as this one), some showing 2X speed increases, others showing a 6X speed increase, but the bottom line was that reaching the 5G speed nirvana is currently less of a critical matter for me, for these reasons:
Moreover, the rollout of the 5G mast network in the UK is not complete, certainly not in rural areas, whereas 4G coverage is pretty good. So striving for the ultimate Mbps values would not be worth the effort and expense, at least right now.
That said, I wasn't going to shy away from the fact that Vodafone's offering includes 5G, because then as soon as the prices for 5G devices come down, I'm ready.
When doing the deal with Vodafone, they offered me a 6-month discount, and I've effectively ended up with a 2 year contract where I'm paying around GBP 25.00 per month for all the data I can eat, at the maximum speed I can consume. Compared with my previous FTTP deal with BT (which of course is fibre and extremely fast, but for which I was paying a lot more), I decided I was happy to pay that amount.
There are two pieces of equipment that I purchased.
I opted for a pretty straightforward 4G+/LTE router, from Huawei. There are plenty of these about, and this one was a refurbished one, from Amazon, specifically the HUAWEI Unlocked Huawei B535-333 Soyealink, CAT7 400mbps 4G+ /LTE Home/Office Router, 1 x RJ11 Tel Port, Includes 2 x External Antennas, Supports VoIP - White (Renewed). There were a few essential aspects that this device had, which satisfied my requirements:
Yes, as a Latin scholar, I'm using "antennae" as the plural of "antenna".
This is not the highest quality router I've had, but it does the job. It works, the UI is serviceable, I can configure it how I want, mostly (trying to set a custom DNS server in the DHCP settings fails consistently, for example). It's not forever; as soon as similar devices that are 5G capable come out, at a more reasonable price, I'll get one of those, to enjoy higher speeds, but for now, this is fine.
From the perspective of receiving a signal, there are three options, in increasing effectiveness:
Given that a narrowboat is a long steel tube, an external antenna solution was going to be essential to enjoy the best speeds. A lot of outdoor antenna equipment uses SMA connectors and this is why such connectors were essential on the router I went for. While I'd likely get a 4G signal just from the router-internal antenna alone, it would be pretty weak and unreliable. Even adding the small bunny ears wouldn't make much difference (from what I've read on the forums).
So an external outdoor antenna solution, mounted on the narrowboat roof, was what I needed.
Roof-mounted antennae would not be subject to the constraints that antennae within the narrowboat would have. Choosing the right type was important, all the same. Some antennae are directional, meaning that to get the optimum signal reception, they need to be oriented and reoriented, to point to the nearest mast. If done properly and consistently, this works well, and directional antennae are often what are mounted on houses. Houses don't move, though, which means that once the orientation is done, the antennae can be left to function.
Narrowboats move. Adjusting the antennae on a narrowboat roof for every new location would get tiresome quickly. Luckily there are omnidirectional solutions, such as the Poynting 4G-XPOL-A0001 Cross Polarised 4G Omni LTE Antenna.
These are a common sight, mounted atop narrowboats on short (approx 40cm) vertical poles which have round magnetised bases, and the cables are fed down through into the cabin where they can then be attached to the back of the router. The reason for the magnetised bases is that if you navigate into a super low tunnel or under a very low bridge, and forget to clear the narrowboat roof beforehand, the antennae device and pole will simply be knocked over and rest on the roof, rather than sustain more significant damage.
As the name of this one suggests, it is good for 4G bands; the frequency range supported is stated as being 790~960, 1710~2170, 2300~2400 and 2500~2700 MHz.
For only a few quid more, the Poynting XPOL-1 V2 5G 3dBi Omni-Directional Cross Polarised LTE 2x2 MIMO Outdoor Antenna provides the same function and covers the same frequency ranges as the 4G XPOL-A0001, but also covers the 5G frequency range (3400~3800 MHz).
So I went for this 5G version, meaning that the only component in the solution that wasn't 5G-ready or capable was the router, which I would replace when the prices drop. Here's what this XPOL-1 V2 5G antenna looks like - it's a similar size to the 4G-XPOL-A0001 and is mounted on the pole in the same way:
I'm currently living in a rented cottage, and have been "soak testing" this very setup for the past 6 months. It's been my only connection to the Internet. It's been great, and I'm more than happy with it.
I've been consistently getting anywhere between 10Mbps and 25Mbps (both down and up). Yes, many of my friends and colleagues are using FTTP these days and revelling in three-figure Mbps readings. But honestly, the speeds I'm getting work fine. More than fine. I've been live streaming on our Hands-on SAP Dev show, I've been in more Teams and Zoom based video conferences than you can shake a stick at, I watch YouTube and Amazon Prime movies in the evening on my Google TV dongle, and stream music from YouTube Music during the day too.
When I've prepared an item for our SAP Developer News show, where we first upload everything to a central server before editing everything together, I've had no problems either.
In short, the solution has met and kept up with my Internet connectivity requirements for work and play since day one.
Here's a picture showing my current setup here in the cottage:
In the picture you can also see the "bunny ears" antennae on the windowsill, now redundant. You can also see the three LED lights on the right of the router showing a "full" signal. The external antennae device is secured to the window pane with suckers that came with it.
(Since I took that picture, in the summer of 2022, I've moved the router to a shelf below the windowsill, where I also now have a Raspberry Pi connected to one of the ethernet ports. I'll cover that in a future post).
In designing the internal layout, working with Mark to achieve the optimum use of space, we ended up with an office area towards the centre of the cabin, which is the perfect size to fit my desk setup you saw near the start of this post. It's highlighted in red here (open the image in a new tab for better viewing):
Each of the squares in the narrowboat design image represents 1 square foot (30 cm square).
I've also highlighted in red where the router will be placed, which is in the electrics cupboard marked "16" near the stern, and where I'll have ethernet ports running from the back of the router: one behind the TV in the saloon (marked "22"), a couple in the office, and one in the bedroom, near the bow, on the shelves marked "56". Wired connections in general are better than wireless, and for the main devices in the narrowboat this makes a lot of sense.
In case you're wondering, the ethernet port in the bedroom will be for Raspberry Pi based experiments. I may run a separate switch to the router so that I can provide power-over-ethernet (PoE) along these cables; I have PoE hats for my Pi devices.
As I don't have my narrowboat yet, my friend Sarah very kindly sent me some pictures of her similar setup on narrowboat Bright Arrow so I could show you what it actually looks like.
Here you can see her 4G-XPOL-A0001 and how it sits on the roof on the pole with that magnetised base:
In this one you can also see how the device is secured to the pole, and how the cables are fed into the cabin through sealed weatherproof connectors, the same ones that are used to feed in from the solar panels:
Next post in this series: Living on a narrowboat - embracing constraints.
]]>I made the explicit decision some time last year, but I think the decision itself was the culmination of a long time desire to live more simply, combined with the realisation that I'm not getting any younger. I have been intrigued by the tiny house movement in the past, but at the same time the lure of the canal network in the UK has been floating around my periphery for a good while.
For those of you not in the UK, the canals form the majority of the network of inland waterways that played a major role in the industrial revolution in the 19th century, being the main transport routes for goods between towns and cities. Around 2700 miles of canals in the system are connected and internavigable (there are more canals that are separate too), most of them in England and Wales. The majority of the canals on the network can accommodate boats up to around 70 feet (21m) long, and up to about 7 feet (2m) wide (hence the term "narrowboat").
While the original narrowboats -- the ones carrying goods during the industrial revolution -- were drawn by horses along the side of the canal, on the "towpath", most of today's narrowboats are powered by diesel engines, though there's an increasing interest in hybrid and pure electric propulsion solutions too. And whereas only a small part of an original narrowboat was given over to accommodation (the vast majority of the space being for the cargo), today's narrowboats have all the amenities for living spread along their length - kitchen (galley), living room (saloon), bathroom and bedroom(s). In fact, modern narrowboats have more tech on them than can be found in most bricks-and-mortar houses.
You can find out more about traditional narrowboats in the article Evolution of the Narrow Boat, which includes some great pictures of the traditional designs too.
As I mentioned at the start of this post, I'm having a narrowboat built. There are many ways that folks move onto "the cut" (what folks call the canal, given that the canals were dug out, mostly by hand, from the landscape); buying second-hand, buying the (steel) shell and fitting it out yourself (these are often called "sailaways") or having the shell built and then fitted out professionally. This latter option comes in two forms, either "off the peg" (i.e. you choose a builder that offers a narrowboat size and fit-out in a layout and specification that suits you) or custom, where you get to specify everything from the length of the narrowboat, to the style (more on that shortly), and every detail of the internal layout.
While longer narrowboats offer more living space, there are some canals (or more specifically some locks) that can only accommodate certain narrowboat lengths. As a general rule, any narrowboat with a length of 57 feet or less can pass through all the locks, and therefore navigate the entire network.
After the length decision there's then the style to decide upon. Generally this comes down to three common variations, known as traditional ("trad"), semi-traditional ("semi-trad") and "cruiser". There are some styles that are a mixture of these, but it's easiest to think of just these three styles, and they all basically describe what the back ("stern") of the narrowboat looks like.
As its name implies, a trad most resembles the original narrowboat designs, where there's only a very small standing space at the helm (you steer from the stern), with as much space given over to covered accommodation as possible.
Then there's the semi-trad which looks from the side like a trad, but the rear-most section forms part of the outside of the narrowboat, i.e. there's more outside standing and sitting space for not only the person navigating but also other folks too.
Then there's the cruiser, which has a larger and more open stern, enough to put up a table and chairs when moored. What you gain in space outside is of course lost inside.
Different folks prefer different layouts and it's more or less just a matter of personal choice and use case requirements.
Of course, I'm massively oversimplifying the layouts here; there are also considerations to be made as to engine placement (and indeed traditional engines have their own "engine room" forward of the stern) but this should hopefully give you a general idea.
And while I'm oversimplifying, I will also say that generally there are only a handful of internal layouts, differing by the order in which the different accommodation sections appear along the length of the narrowboat. Traditionally, going from the stern (rear) to the bow (front), you'll get the bedroom first, then the bathroom, then galley and saloon. There's a layout that seems to be more popular these days, which is a "reverse", i.e. galley first, then saloon, then bathroom and finally bedroom at the bow.
I have been in the fortunate position of having the funds (from part of the proceeds of a house sale) to go for the custom option, where I've specified pretty much everything.
I wanted to go for a narrowboat that would not be network-restricted, i.e. of a maximum length of 57 feet and, as is typical, 6 feet 10 inches wide (yes, narrowboats are still measured in feet and inches, so I'll stick to those units).
Having spent time on cruiser style narrowboats and enjoyed the outside space very much, I decided upon a cruiser stern, with plenty of space at the back to put out a couple of camping chairs or similar, plus compartments to store gas canisters (for oven and hob cooking) and more.
For reference, the picture immediately below is that of Queenie, owned by Hester at Star Narrowboat Holidays, moored in Altrincham on the Bridgewater Canal. Queenie has a cruiser stern, and for scale, is 50 feet in length.
And internally I've gone for a reverse layout, with the steps at the rear leading down into the galley. This was also the layout on the narrowboats I've spent time on before (such as Queenie), and I think it's more practical on the whole, for example being able to pop down to get a cup of tea, or grab something, without having to go through the bedroom first (to be fair, many narrowboats that don't have the reverse internal layout are of the trad or semi-trad design, which features an engine and storage room at the very rear of the internal cabin area, so access to tools and so on is simpler than you might think).
Here's my narrowboat design (open the picture in a new browser window for a larger version):
My research and enquiries led me to The Fitout Pontoon. And after a great initial phone call, I decided to go with them for the design, an independent step before committing to anything further. I worked with them to finalise a design and specification that I could then take to whatever boat builder I chose.
In the end, because my experience with them in design phase was so positive, I decided to engage them for the entire journey. They would take the design, and deliver the narrowboat according to the specifications. One of the deciding factors was that I got on really well with the proprietor and chief designer Mark, who lives aboard his own narrowboat with his wife and child, and has a huge wealth of experience, knowledge and well-based opinions on everything I could think of, and also on plenty of things I didn't even know that I didn't know.
I'd also already heard great things about them and had followed Chris Mears and his experiences with them on his YouTube channel In Slow Time, specifically in these two playlists:
That distinction afforded by the pair of playlists is an important one. Building narrowboat shells is a different set of skills to fitting out a narrowboat, and The Fitout Pontoon use JSR Boats, a well respected shell builder, to create the steel shell, which they then take and turn into a fully fitted out liveaboard narrowboat.
The steelwork is done. During the construction, which was at JSR Boats' facility near Northampton, I got a chance to visit and document the progress in photos. In addition, Mark kept me updated with further photos too.
Before I bring this first post to an end, I'll share some of those early pictures, from a couple of months ago in November 2022.
In this one I'm standing at the stern, and you can just see the beautiful "swim" near the bottom of the narrowboat (where the boat tapers to a narrower point), along which the water will flow and out the rear of which will be the propeller shaft and propeller.
I love this next shot because it really shows that it's hand built; I'm in awe of the skills involved in producing anything like this. Incidentally the steel specifications here are ones typically used for such builds, being "10,6,5,4". These are steel thicknesses in mm, and are for (in order) the base plate, the hull sides, the cabin sides and the roof.
In this third shot you can see the cabin ends, with a space for the double door from the front of the cabin into the bow ... more specifically into the area called the "well deck", underneath which will be the 450l fresh water tank.
In fact, the build has progressed beyond this, but I'll save more pictures and details for the next post.
If there's something you'd like to know, tell me in the comments and I'll try to answer it next time.
Next post in this series: Working from a narrowboat - Internet connectivity.
]]>Occasionally I browse the Newest 'jq' questions on Stack Overflow and try to gently expand my jq knowledge, or at least exercise my young jq muscles. This morning I came across this one: Jq extracting the name and the value of objects as an array. Sometimes the questions are hard, sometimes less so. This one didn't seem too difficult, so I thought I'd take a quick coffee break to see what I could come up with (the question had already been answered but I didn't look until later).
The OP had this JSON:
{
"filterFeatureGroup": {
"Hauttyp": [
"Normal"
],
"Deckkraft": [
"Mittlere Deckkraft"
],
"Grundfarbe": [
"Grau"
],
"Produkteigenschaften": [
"Vegan"
],
"Textur / Konsistenz / Applikation": [
"Stift"
]
}
}
and wanted to turn it into this:
[
"Hauttyp: Normal",
"Deckkraft: Mittlere Deckkraft",
"Grundfarbe: Grau",
"Produkteigenschaften: Vegan",
"Textur / Konsistenz / Applikation: Stift"
]
As a bonus, I learned that "Deckkraft" means opacity in German. I don't think I've ever seen that word before, or had occasion to use that concept in a conversation. I'm guessing that this data perhaps relates to make-up or something similar. Anyway.
In thinking about an approach for this data transformation, it struck me that the Perl adage There's more than one way to do it (often shortened to "TIMTOWDI" and pronounced "Tim Toady") is often at play with jq, too.
I fired up my favourite interactive jq explorer, ijq, and loaded the data. Clearly the first parts of the output strings were the keys within the object that was the value of the filterFeatureGroup property, i.e. Hauttyp, Deckkraft, Grundfarbe and so on. So my immediate approach was to look at them using keys:
.filterFeatureGroup | keys
[
"Deckkraft",
"Grundfarbe",
"Hauttyp",
"Produkteigenschaften",
"Textur / Konsistenz / Applikation"
]
This already looked quite close to the target output, so I forced my way forwards, pulling the values from the input that I had to squirrel away first via a symbolic binding to $x:
.filterFeatureGroup as $x
| $x
| keys
| map("\(.): \($x[.][0])")
The string expression "..." includes the string interpolation construct (\(...)) to include the value of an expression.
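To see interpolation in isolation (this is my own minimal illustration, not part of the original question), any jq expression can be embedded in a string like this:

```shell
# string interpolation: the expression inside \( ... ) is evaluated
# and its result is embedded in the resulting string
jq -nc '"1 + 2 is \(1 + 2)"'
# → "1 + 2 is 3"
```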
This produced the right output:
[
"Deckkraft: Mittlere Deckkraft",
"Grundfarbe: Grau",
"Hauttyp: Normal",
"Produkteigenschaften: Vegan",
"Textur / Konsistenz / Applikation: Stift"
]
but felt a little cumbersome, and perhaps not idiomatic. Here are the problems I saw:
- the initial part (.filterFeatureGroup as $x | $x) felt a little clunky
- the $x[.][0] bothered me a bit

I noticed that the output required values that exist as property names in the input: Hauttyp, Deckkraft and other values. More generally, when that is the case (as now) -- when property names are "values" -- my jq "antennae" are directed towards the to_entries, from_entries, with_entries family.
These functions convert back and forth between objects and arrays of key/value pairs, and in particular, to_entries will reshape an object so it's more straightforward programmatically to get at those property name values. Here's an example. If we have this input:
{
"name": "DJ Adams",
"website": "https://qmacro.org"
}
then passing this through to_entries will produce this:
[
{
"key": "name",
"value": "DJ Adams"
},
{
"key": "website",
"value": "https://qmacro.org"
}
]
Now each of the property name values (name and website here) is addressable via a consistent property name, key, across the objects that represent each of the original property name and value pairs.
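For completeness (my own illustration, not part of the original question), from_entries is the inverse of to_entries, and with_entries(f) bundles the round trip with a map over the entries:

```shell
# from_entries reverses to_entries, so a round trip returns the original object
jq -nc '{name: "DJ Adams"} | to_entries | from_entries'
# → {"name":"DJ Adams"}

# with_entries(f) is shorthand for: to_entries | map(f) | from_entries,
# here used to uppercase every key
jq -nc '{name: "DJ Adams"} | with_entries(.key |= ascii_upcase)'
# → {"NAME":"DJ Adams"}
```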
Applying to_entries to the object which is the value of the filterFeatureGroup property, like this:
.filterFeatureGroup
| to_entries
we get this:
[
{
"key": "Hauttyp",
"value": [
"Normal"
]
},
{
"key": "Deckkraft",
"value": [
"Mittlere Deckkraft"
]
},
{
"key": "Grundfarbe",
"value": [
"Grau"
]
},
{
"key": "Produkteigenschaften",
"value": [
"Vegan"
]
},
{
"key": "Textur / Konsistenz / Applikation",
"value": [
"Stift"
]
}
]
The data itself now feels a little more "pedestrian", perhaps, but it also feels a little easier to work with because of that.
The subsequent approaches are all based on this initial reshaping of the data.
Given the ability to more easily and more directly (explicitly) access the first part of what's required in the output, I moved forward like this:
.filterFeatureGroup
| to_entries
| map([.key, .value[0]])
This produced the following, which feels a little closer:
[
[
"Hauttyp",
"Normal"
],
[
"Deckkraft",
"Mittlere Deckkraft"
],
[
"Grundfarbe",
"Grau"
],
[
"Produkteigenschaften",
"Vegan"
],
[
"Textur / Konsistenz / Applikation",
"Stift"
]
]
I could then just map over these inner arrays and use join to create a string from the values in them, which I did, like this:
.filterFeatureGroup
| to_entries
| map([.key, .value[0]])
| map(join(": "))
This produced the desired output:
[
"Hauttyp: Normal",
"Deckkraft: Mittlere Deckkraft",
"Grundfarbe: Grau",
"Produkteigenschaften: Vegan",
"Textur / Konsistenz / Applikation: Stift"
]
This approach felt a little better, not only because of the cleaner use of to_entries but also because I wasn't constructing a string manually with string interpolation (instead, using join with an array).
But there were a couple of new things that didn't feel quite right:
- the sequence of two map calls; this feels OK to some extent, especially in the context of more literate (or explicit) chains of functions in Ramda's pipe or compose context (see ES6, reduce and pipe for an example) but perhaps it could be neater in jq
- the use of [0] to get the first (and only) values (such as Normal and Grau) out of each of the innermost arrays was OK but made me feel as though I could perhaps transform the input into something even cleaner and simpler earlier in the process
calls, it was just a matter of rearranging the construction so that the call to join
was in the same loop, so it looked like this:
.filterFeatureGroup
| to_entries
| map([.key, .value[0]] | join(": "))
This produces the same output:
[
"Hauttyp: Normal",
"Deckkraft: Mittlere Deckkraft",
"Grundfarbe: Grau",
"Produkteigenschaften: Vegan",
"Textur / Konsistenz / Applikation: Stift"
]
After addressing the map sequence issue, I was happy enough, but I wanted to go back to see if I could address the use of the [0] array index, by simplifying the data earlier in the filter pipeline.
Examining the first entry in the now-simplified filterFeatureGroup object, like this:
.filterFeatureGroup | to_entries | first
we get this:
{
"key": "Hauttyp",
"value": [
"Normal"
]
}
What we really want from this particular entry is just the Hauttyp and Normal strings (to become "Hauttyp: Normal").
There's a function called flatten which, according to the manual, operates on arrays and does what you sort of expect it to do (again, using Ramda's flatten as a reference). Given an array such as [1, [2, 3]], flatten will produce this: [1, 2, 3].
What the manual doesn't mention is that it also operates, in a sensible way, on objects. Given the object entry above, if we add flatten to the filter pipeline, like this:
.filterFeatureGroup | to_entries | first | flatten
we get this:
[
"Hauttyp",
"Normal"
]
Nice! In a way, this for me feels like another philosophical approach that I also learned about in my Perl days (although it goes back way beyond that): Do What I Mean, also known as "DWIM". Given the data context and what flatten does in general, I'm not surprised at the result, and it's what I would want, or mean, when I invoke it on an object.
Given this, I can do away with a lot of the mechanics for extracting the values, and just write this:
.filterFeatureGroup
| to_entries
| map(flatten | join(": "))
I'm happy to report that this also produces the desired output:
[
"Hauttyp: Normal",
"Deckkraft: Mittlere Deckkraft",
"Grundfarbe: Grau",
"Produkteigenschaften: Vegan",
"Textur / Konsistenz / Applikation: Stift"
]
I think I like this approach the most.
Working through simple questions like this helps me think about jq more, and as I do so, I learn to think more about data structures, which I did in Perl too. I am also learning to think about how data structures change as they are sent through pipelines of filters.
Incidentally, the accepted answer is a combination of some of what I explored in this post:
.filterFeatureGroup | to_entries | map("\(.key): \(.value[0])")
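As a quick check (my addition, using a trimmed-down version of the OP's input), the accepted answer can be run from the shell like this:

```shell
# pipe a small sample of the OP's JSON through the accepted answer's filter
printf '%s' '{"filterFeatureGroup": {"Hauttyp": ["Normal"], "Deckkraft": ["Mittlere Deckkraft"]}}' \
  | jq -c '.filterFeatureGroup | to_entries | map("\(.key): \(.value[0])")'
# → ["Hauttyp: Normal","Deckkraft: Mittlere Deckkraft"]
```

Note that to_entries preserves the input's key order, which is why the output order matches the OP's target rather than the sorted order that keys produces.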
Hopefully this has also helped you think a bit more about processing JSON with jq.
]]>jq, and in Day 7: No Space Left On Device I think I need a way of appending values to arrays, which are themselves values of properties that I create on the fly. This may not turn out to be useful in the end, but I wanted to explore it (I was thinking I could store the list of files in a given directory like this).
See the update at the end of this post for a much neater approach.
The structure I had in mind is this (in pseudo-JSON):
{
"dirs": {
"a": [file1, file2, ...],
"b": [file3, ...]
...
},
...
}
Thing is, I need to create the contents of the object at dirs as I go along. In other words, a and b don't necessarily exist at first.
The first time I need to create a new entry like this, it needs to be an array, with the entry as the first and only value:
{
"dirs": {
"a": [file1]
},
...
}
But subsequently I need to just append entries (such as file2
here) to the existing array:
{
"dirs": {
"a": [file1, file2]
},
...
}
The concept of autovivification came to mind; I first learned about this word and concept in my Perl days, and it's never left me (in fact a lot of how I think in terms of complex data structures I learned back then).
Effectively I want to be able to push a new item, but make sure that the array exists first, creating it if it doesn't. Investigating this led me to the family of path-related functions path(path_expression)
, del(path_expression)
, getpath(PATHS)
, setpath(PATHS; VALUE)
and delpaths(PATHS)
.
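Before building anything with them, the two functions I ended up leaning on can be poked at in isolation (the `{"dirs":{}}` input here is just a made-up minimal example):

```shell
# getpath on a missing path yields null, and setpath creates any
# intermediate objects it needs on the way to the leaf
echo '{"dirs":{}}' \
  | jq -c '[getpath(["dirs","a"]), setpath(["dirs","a"]; ["file1"])]'
# → [null,{"dirs":{"a":["file1"]}}]
```

That null-for-missing-paths behaviour is exactly what makes the `// []` fallback in apush work.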
Here's what I came up with, as a sort of "autovivification-push" (where the semantics of push are borrowed more from JavaScript's Array.prototype.push()):
def apush($pexp;$item):
setpath($pexp;(getpath($pexp) // []) + [$item])
;
Given that, then the following:
{
dirs: {
a: ["file1"]
}
}
| apush(path(.dirs.a);"file2")
| apush(path(.dirs.b);"file3")
| apush(path(.dirs.b);"file4")
produces this:
{
"dirs": {
"a": [
"file1",
"file2"
],
"b": [
"file3",
"file4"
]
}
}
The b
array is effectively autovivified when the first item (file3
) needs to be pushed.
Like I say, I may go off in another direction for this puzzle, but wanted to make a note of this apush
idea.
Holy bananas, Batman. Mattias Wadman just replied to me on Mastodon with a much neater alternative, one that I should have realised sooner:
{
dirs: {
a: ["file1"]
}
}
| .dirs.a += ["file2"]
| .dirs.b += ["file3"]
| .dirs.b += ["file4"]
This results in the same JSON as above. This is a much more precise approach that also, now I see it, is clearly more idiomatic. I had seen the +=
operator in the manual (in the Arithmetic update-assignment section) but, looking at the description, I had applied only a narrow part of my brain and not seen that it might be usable beyond arithmetic operations! Of course! Thanks Mattias.
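The reason this works at all is that `.dirs.b += ["file3"]` is sugar for `.dirs.b = .dirs.b + ["file3"]`, and in jq adding null to any value returns the other value unchanged, so not-yet-existing paths spring into life. A quick check:

```shell
# .dirs.b does not exist yet; += evaluates null + ["file3"] = ["file3"]
echo '{"dirs":{"a":["file1"]}}' \
  | jq -c '.dirs.a += ["file2"] | .dirs.b += ["file3"]'
# → {"dirs":{"a":["file1","file2"],"b":["file3"]}}
```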
I have a working list of blog posts, as issues in a GitHub repo (as a sort of temporary data store). Each issue has the blog post title as the issue title, and just the blog post URL in the issue body, like this:
I had retrieved the issue data as JSON like this:
gh issue list \
--limit 500 \
--label dj-adams-sap \
--json number,title,body \
> dj-adams-sap.json
Here's what the first and last couple of items in dj-adams-sap.json
look like (extracted with jq '.[:2] + .[-2:]' dj-adams-sap.json
):
[
{
"body": "https://blogs.sap.com/2018/03/26/monday-morning-thoughts-cloud-native/",
"number": 224,
"title": "Monday morning thoughts- cloud native"
},
{
"body": "https://blogs.sap.com/2018/03/31/scripting-the-workflow-api-with-bash-and-curl/",
"number": 223,
"title": "Scripting the Workflow API with bash and curl"
},
{
"body": "https://blogs.sap.com/2022/08/04/introducing-sap-codejam-btp-a-new-group-and-a-first-event/",
"number": 83,
"title": "Introducing “SAP CodeJam BTP” - a new group, and a first event"
},
{
"body": "https://blogs.sap.com/2022/10/06/devtoberfest-2022-week-2/",
"number": 82,
"title": "Devtoberfest 2022 Week 2"
}
]
The dates of the blog posts can be determined from the first part of the path info in the blog post URLs, clearly. So I decided to map over each object and add a new property postdate
which would be a YYYY-MM-DD
formatted string worked out from that data.
First, I decided to define a function to extract the date:
def date:
sub(
"^https.+?com/(?<yyyy>[0-9]{4})/(?<mm>[0-9]{2})/(?<dd>[0-9]{2})/.+$";
"\(.yyyy)-\(.mm)-\(.dd)"
);
This uses the sub function to perform a regexp based substitution, actually replacing the entire input string (the URL) with a new string made up from the capture groups defined.
These are named capture groups, here's one of them; this one matches 4 consecutive digits into a capture group named yyyy
:
(?<yyyy>[0-9]{4})
Looking at the argument supplied for the second parameter of sub/2
, the \( ... )
syntax is string interpolation, used to have an expression (in this example it's .yyyy
, .mm
and .dd
) evaluated and expanded in a string.
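Here's the sub/2 call tried out on its own, with one of the real URLs from the data set:

```shell
# Replace the whole URL with a date built from the named capture groups
echo '"https://blogs.sap.com/2018/03/26/monday-morning-thoughts-cloud-native/"' \
  | jq -r 'sub("^https.+?com/(?<yyyy>[0-9]{4})/(?<mm>[0-9]{2})/(?<dd>[0-9]{2})/.+$";
               "\(.yyyy)-\(.mm)-\(.dd)")'
# → 2018-03-26
```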
With the date
function ready, I could then simply iterate over the items in the array, adding a new postdate
property to each object, with the value of whatever the date
function extracts from the item's .body
property:
map(. + { postdate: .body|date })
Based on the reduced data set above, this then produces:
[
{
"body": "https://blogs.sap.com/2018/03/26/monday-morning-thoughts-cloud-native/",
"number": 224,
"title": "Monday morning thoughts- cloud native",
"postdate": "2018-03-26"
},
{
"body": "https://blogs.sap.com/2018/03/31/scripting-the-workflow-api-with-bash-and-curl/",
"number": 223,
"title": "Scripting the Workflow API with bash and curl",
"postdate": "2018-03-31"
},
{
"body": "https://blogs.sap.com/2022/08/04/introducing-sap-codejam-btp-a-new-group-and-a-first-event/",
"number": 83,
"title": "Introducing “SAP CodeJam BTP” - a new group, and a first event",
"postdate": "2022-08-04"
},
{
"body": "https://blogs.sap.com/2022/10/06/devtoberfest-2022-week-2/",
"number": 82,
"title": "Devtoberfest 2022 Week 2",
"postdate": "2022-10-06"
}
]
Then it's just a simple case of using sort_by
(followed optionally by reverse
) to get the post date order I want:
map(. + { postdate: .body|date })
| sort_by(.postdate)
Of course, I could combine the two parts if I didn't want the postdate
property to be an explicit fixture in my downstream processing. Something like this:
sort_by(.body | date)
It did occur to me that given the pattern of blog post URLs, I could just sort by them directly. Then again, it wasn't as interesting and I didn't learn anything about named capture groups. Anyway, this post is mostly for me, for when my future self forgets how to use capture groups and the sub
function.
If you're not familiar with JSON Schema, there are some great introductory tutorials that I would recommend. The Slack channel is friendly and welcoming too.
OK, so I've been experimenting with how I might construct a schema in a modular way; this is as opposed to more monolithic ones which are arguably harder to read and manage. At least for me and my small brain.
I'm going to go for a deliberately contrived and boringly simple example, where I want to have JSON data sets that contain information about two types of things - people and vehicles.
A person has a first name and a last name (both are required). A vehicle has a make and a model (again, both are required).
A given JSON data set (represented by a single JSON file) can contain zero or more of these two types of things, where each thing is either a PERSON object or a VEHICLE object, represented by a category, contained within a "things" array, which is the single property of the outermost containing object. In other words, something like this:
{
"things": [
{
"category": "VEHICLE",
"make": "Chevrolet",
"model": "Caprice"
},
{
"lastName": "Adams",
"firstName": "DJ",
"category": "PERSON"
},
{
"make": "Tesla",
"model": "Model 3",
"category": "VEHICLE"
}
]
}
Let's imagine that this is in a file called data.json
.
What would a schema look like for this data set, and in particular, what might a modular schema look like?
I'll start with the outermost parts, and base the schema definition on draft 07. This is not the latest version of the JSON Schema specification but it's the one currently used to qualify the schemas in the BTP Setup Automator project so I'll go with that.
{
"$schema": "https://json-schema.org/draft-07/schema",
"description": "Things which are either people or vehicles",
"type": "object",
"required": [ "things" ],
"additionalProperties": false
}
Let's say this is in a file called myschema.json
.
So far so good. This says that the JSON should be an object with a single property things
, which is required. No other properties are allowed.
Interestingly, as it stands right now (deliberately cut short, unfinished), this schema represents a contradiction. If I construct some JSON that is governed by this schema, I'm damned if I do include a things
property and damned if I don't:
Just this:
{}
gives me an error: "Missing property: things".
But if I add this property:
{
"things": []
}
an error is also surfaced to me: "Property things is not allowed" (regardless of what type of value I specify for it).
It was not my intention to include this in my notes, but I just discovered it, and thought it worth sharing. It makes sense - the schema so far says:
"A things
property is required, no other properties are allowed, but there's no list of actual properties defined."
Even more interesting - if I remove the "additionalProperties": false
constraint, then while {}
still gives an error, { "things": [] }
does not, because the "no other properties are allowed" restriction is lifted.
Anyway.
So clearly it's time to extend the schema now to allow for the actual things
property, describing what it should be.
{
"$schema": "https://json-schema.org/draft-07/schema",
"description": "Things which are either people or vehicles",
"type": "object",
"required": [ "things" ],
"additionalProperties": false,
"properties": {
"things": {
"type": "array",
"items": {
"type": "object"
}
}
}
}
I've added a minimal definition of the things
property. It's an array of objects, that's about all this says so far.
But I need to constrain those objects to reflect either a PERSON or a VEHICLE. And this is where I want to try out some modularisation.
The JSON Schema keyword oneOf seems ideal for this job. The description even uses phrases like "combining schemas from multiple files" and "the given data must be valid against exactly one of the given subschemas". This is exactly the sort of thing I had in mind, in that I want to think about the constraints for a PERSON, and the constraints for a VEHICLE, as separate modular subschemas.
I say "subschema" but want to emphasise that these subschemas are perfectly valid and independent schemas, they aren't only valid within the context of a referencing schema.
Perhaps I should try out a simple example of oneOf
first. In the Schema Composition section of Understanding JSON Schema the example looks like this (FizzBuzz, anyone?):
{
"oneOf": [
{ "type": "number", "multipleOf": 5 },
{ "type": "number", "multipleOf": 3 }
]
}
I'll insert that verbatim into the schema I have so far; to do that, I'll have to temporarily remove the "type": "object"
constraint from the items
property definition, as the type
(both number
) is defined in each of the two separate subschemas in this example (I'll come back to that in a minute, though):
{
"$schema": "https://json-schema.org/draft-07/schema",
"description": "Things which are either people or vehicles",
"type": "object",
"required": [
"things"
],
"additionalProperties": false,
"properties": {
"things": {
"type": "array",
"items": {
"oneOf": [
{ "type": "number", "multipleOf": 5 },
{ "type": "number", "multipleOf": 3 }
]
}
}
}
}
As expected, this will appropriately validate the following data:
{
"things": [1, 2, 3, 4, 5]
}
The values 1, 2 and 4 are marked in my editor as invalid (interestingly, but not completely unexpectedly, with errors relating to the first constraint in the list, namely "Value is not divisible by 5").
Just coming back to that type
definition for a second; instead of removing the "type": "object"
constraint to make way for the two type
definitions in what I was pasting in, I could have removed the two type
definitions in what I was pasting in, and floated the constraint one level up, changing the value for the type
property from "object"
to "number"
, like this:
"items": {
"type": "number",
"oneOf": [
{ "multipleOf": 5 },
{ "multipleOf": 3 }
]
}
This is much cleaner. But I digress (again).
First, what do I actually mean by modularisation? Well I want to have the definitions for PERSON and VEHICLE in separate files, each representing a subschema, and then I want to be able to point to those two subschema files in the context of this oneOf
section.
Why? Well, I feel as though I'd be better able to construct, think about and maintain schemas if they're smaller and self-contained, and then glue them together as I see fit.
If I take the PERSON definition, I could define a self-contained schema that might look like this:
{
"$schema": "https://json-schema.org/draft-07/schema",
"title": "Person schema",
"properties": {
"category": { "enum": [ "PERSON" ] },
"firstName": { "type": "string" },
"lastName": { "type": "string" }
},
"required": [ "firstName", "lastName", "category" ],
"additionalProperties": false
}
Similarly, here's a self-contained schema for VEHICLE:
{
"$schema": "https://json-schema.org/draft-07/schema",
"title": "Vehicle schema",
"properties": {
"category": { "enum": [ "VEHICLE" ] },
"make": { "type": "string", "enum": [ "Tesla", "Chevrolet" ] },
"model": { "type": "string" }
},
"required": [ "make", "model", "category" ],
"additionalProperties": false
}
I've added some vehicle manufacturer constraints for a bit of spice.
Each of these schemas is complete and self-contained, and each can be employed to validate data appropriately. But they can also be combined, as subschemas, with oneOf
, into a larger whole.
I've put each of these into files in a subdirectory called things/
, such that I now have this in my workspace:
.
|-- myschema.json
`-- things
|-- person.json
`-- vehicle.json
The combining can be achieved through the use of the $ref keyword.
So to reference these two self-contained schemas to describe what the items
can be, I can do this:
{
"$schema": "https://json-schema.org/draft-07/schema",
"description": "Things which are either people or vehicles",
"type": "object",
"required": [ "things" ],
"additionalProperties": false,
"properties": {
"things": {
"type": "array",
"items": {
"type": "object",
"oneOf": [
{ "$ref": "./things/person.json" },
{ "$ref": "./things/vehicle.json" }
]
}
}
}
}
For me, there's a bit of magic that makes this sort of construction work really well. Looking back at the two subschema definitions, each of them defines a category
property, and in each case, only a single specific value for that property is valid. This constraint is achieved with the use of the enum keyword, where there's just a single value in the array of possible values.
Here's what the two definitions look like:
"category": { "enum": [ "PERSON" ] }
and
"category": { "enum": [ "VEHICLE" ] }
This means that for an item object to match the PERSON definition, the value of the category
property in that object must be "PERSON". Likewise, for an item object to match the VEHICLE definition, the value of the category
property in that object must be "VEHICLE".
This then has effects that are great for validation, best illustrated by imagining the creation of new item
objects. Let's play a couple of examples out.
First, I'll add a vehicle object.
In the data.json
file, in the array that is the value for the things
property, I create a new empty object {}
:
{
"things": [
{}
]
}
This assumes that my editor has associated this
data.json
file with the JSON Schema in myschema.json
. I'll talk about how this is done in another post.
On entering {}
I immediately get a message: "Matches multiple schemas when only one must validate". Fair enough. Not enough data to go on yet.
I ask for suggestions and am presented with a combination of all the properties from the VEHICLE and PERSON schemas, i.e.:
category
firstName
lastName
make
model
Also fair. I'm still at the fork in the road.
I choose make
, and request autocomplete, and then I'm presented with the following possible values:
Makes sense, these are the string values in the enum
defined for that property, in the VEHICLE subschema. So I select "Chevrolet", and ask for suggestions for the next property. This time I'm just presented with two:
category
model
This also makes sense - given that the object now has a make
property, it will only match with the VEHICLE schema.
I choose category
and request autocomplete. This time the value is automatically filled for me, it's "VEHICLE". It can only be that value, and there were no other values suggested.
At this stage my object looks like this:
{
"things": [
{ "make": "Chevrolet", "category": "VEHICLE" }
]
}
There's still an error showing, and this time it's "Missing property model". Of course. The validation mechanism has matched the VEHICLE subschema, and according to the required
property in that subschema ...
{
"required": [ "make", "model", "category" ]
}
... the model
property is also required.
So I add one (autocomplete has this property as its only suggestion anyway) and specify "Caprice" as the value.
I end up with my first thing, and everything is valid:
{
"things": [
{
"make": "Chevrolet",
"category": "VEHICLE",
"model": "Caprice"
}
]
}
For a second example, I'll add a person object.
I start out the same way as before, by adding a new {}
empty object, and then select category
as the property I want to create first:
{
"things": [
{
"make": "Chevrolet",
"category": "VEHICLE",
"model": "Caprice"
},
{ "category": ... }
]
}
The autocomplete automatically suggests "PERSON" as the value. This strikes me as slightly odd, and perhaps a foible of the implementation of autocomplete in the editor I'm using. Because at this stage the choice of subschemas is still open, right?
I guess it's sort of understandable (almost?) in that if I've asked it to suggest a value, then it needs to suggest one, and picks the first possibility, which due to the simple fact that the PERSON subschema is listed first in the oneOf
array ...
"oneOf": [
{ "$ref": "./things/person.json" },
{ "$ref": "./things/vehicle.json" }
]
... is "PERSON". Makes sense, sort of.
So anyway I remove the suggested "PERSON" value and ask for suggestions again; this time, it gives me a choice of the two actual possibilities: "PERSON" and "VEHICLE". I select "PERSON" anyway, but am happy to have seen that both were presented as options.
I then proceed to ask for and then select the only two remaining possible properties which are firstName
and lastName
because the choice of "PERSON" for category
has locked this object into being constrained by the corresponding subschema, add the values, and end up with:
{
"things": [
{
"make": "Chevrolet",
"category": "VEHICLE",
"model": "Caprice"
},
{
"category": "PERSON",
"firstName": "Arthur",
"lastName": "Dent"
}
]
}
I've put the schema / subschemas combination through its paces and am happy with the result - it's what I'd expect (modulo the questionable editor behaviour mentioned) from the overall schema.
One of the reasons I write rambling notes to myself (and to you, dear reader, if - by this point in the post - you're still here) is that at the end of it, my own understanding is better. Not only that, in looking up stuff that I can reference, I learn new things.
An example of this is that in looking up the content related to enumerated values I noticed, directly in the next section, that JSON Schema also has constant values! They're new from draft 06, so are fine for me to use.
This is a new discovery for me, and I can replace the magic earlier - enums
with single values - with this const
keyword.
Taking the VEHICLE subschema as an example, here's what it looks like now:
{
"$schema": "https://json-schema.org/draft-07/schema",
"title": "Vehicle schema",
"properties": {
"category": { "const": "VEHICLE" },
"make": { "type": "string", "enum": [ "Tesla", "Chevrolet" ] },
"model": { "type": "string" }
},
"required": [ "make", "model", "category" ],
"additionalProperties": false
}
Using const
makes more sense, and is more explicit. Nice!
I think the possibilities of managing schemas in a modular way are definitely there, and this brief foray into that area of JSON Schema has taught me a thing or two. I hope it has helped you become acquainted too.
Here's a pic of my 1973 Chevrolet Caprice Classic Coupe which I had in the 1990s. Long gone, never forgotten.
Part 2 finished with an array of category objects, each containing all the checkin ratings for that category, albeit in string form, with some empty strings:
[
{
"key": "Altbier",
"value": [
"4",
"3",
"3.75",
"3.5",
"3.25"
]
},
{
"key": "...",
"value": [
"...",
"..."
]
},
{
"key": "Winter Warmer",
"value": [
"",
"",
"4",
"4",
"4",
"3.5",
"4",
"4.25",
"3.25",
"4.25",
"3.75",
"3.4"
]
}
]
This was achieved using a pattern now encapsulated into a function called arrange
:
def category: split(" -") | first;
def arrange(k;v):
group_by(.[k])
| map({key: (first|.[k]), value: v});
map({ category: .beer_type|category, rating_score })
| arrange("category"; map(.rating_score))
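As a sanity check, arrange/2 can be exercised on a tiny hypothetical checkin list:

```shell
# Group by category, collecting each group's rating_score strings;
# group_by sorts the groups by the grouping key
echo '[{"category":"Altbier","rating_score":"4"},
       {"category":"Altbier","rating_score":"3"},
       {"category":"Bock","rating_score":"3.5"}]' \
  | jq -c '
      def arrange(k;v):
        group_by(.[k])
        | map({key: (first|.[k]), value: v});
      arrange("category"; map(.rating_score))'
# → [{"key":"Altbier","value":["4","3"]},{"key":"Bock","value":["3.5"]}]
```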
So, about those rating values. I'll take the ratings for the Winter Warmer category as an example to work on, and I can get a list of those by extending the current filter like this:
def category: split(" -") | first;
def arrange(k;v):
group_by(.[k])
| map({key: (first|.[k]), value: v});
map({ category: .beer_type|category, rating_score })
| arrange("category"; map(.rating_score))
# Temporary selection of Winter Warmer ratings
| map(select(.key == "Winter Warmer"))|first|.value
I've deliberately put some whitespace (and a comment) before this temporary extension, to make it clear it's not permanent.
The output looks like this:
[
"",
"",
"4",
"4",
"4",
"3.5",
"4",
"4.25",
"3.25",
"4.25",
"3.75",
"3.4"
]
OK, so it seems worthwhile building something to filter these values down to ones that are not empty and to turn them from strings to numbers. While there isn't an explicit filter
function, it's achieved by the combination of map
and select
, which is very common to see. In fact, I use it in this temporary extension: map(select(.key == "Winter Warmer"))
.
To be honest, I've often wondered why a simple syntactic sugar function isn't in the builtin library, something like this:
def filter(f): map(select(f));
Then I could have expressed the above section like this:
filter(.key == "Winter Warmer")
.
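For what it's worth, the sugar behaves as I'd expect:

```shell
# A hypothetical filter/1: keep only the values for which f is truthy
echo '[1,2,3,4,5]' | jq -c 'def filter(f): map(select(f)); filter(. > 3)'
# → [4,5]
```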
Anyway, to the data. Filtering out anything except actual values could be done like this:
def category: split(" -") | first;
def arrange(k;v):
group_by(.[k])
| map({key: (first|.[k]), value: v});
map({ category: .beer_type|category, rating_score })
| arrange("category"; map(.rating_score))
# Temporary selection of Winter Warmer ratings
| map(select(.key == "Winter Warmer"))|first|.value
| map(select(length > 0))
Which reduces the array of values appropriately:
[
"4",
"4",
"4",
"3.5",
"4",
"4.25",
"3.25",
"4.25",
"3.75",
"3.4"
]
And conveniently, there's a function to parse input as a number, appropriately called tonumber
(there's also tostring
). Adding that to this filter like this:
def category: split(" -") | first;
def arrange(k;v):
group_by(.[k])
| map({key: (first|.[k]), value: v});
map({ category: .beer_type|category, rating_score })
| arrange("category"; map(.rating_score))
# Temporary selection of Winter Warmer ratings
| map(select(.key == "Winter Warmer"))|first|.value
| map(select(length > 0)|tonumber)
gives us:
[
4,
4,
4,
3.5,
4,
4.25,
3.25,
4.25,
3.75,
3.4
]
That's what we want! Worth putting into a function, don't you agree? How about calling that function numbers
, and then using it in our temporary "Winter Warmer" extension:
def category: split(" -") | first;
def arrange(k;v):
group_by(.[k])
| map({key: (first|.[k]), value: v});
def numbers: (map(select(length > 0)|tonumber));
map({ category: .beer_type|category, rating_score })
| arrange("category"; map(.rating_score))
# Temporary selection of Winter Warmer ratings
| map(select(.key == "Winter Warmer"))|first|.value
| numbers
While I'm in the mood for functions, how about one that will give the average of an array of numbers? I'll call it average
and add it to untappd.jq
:
def category: split(" -") | first;
def arrange(k;v):
group_by(.[k])
| map({key: (first|.[k]), value: v});
def numbers: (map(select(length > 0)|tonumber));
def average: (add / length) * 10 | floor / 10;
map({ category: .beer_type|category, rating_score })
| arrange("category"; map(.rating_score))
# Temporary selection of Winter Warmer ratings
| map(select(.key == "Winter Warmer"))|first|.value
| numbers
| average
I added some numeric fettling to the average
function to ensure I'd end up with an average rating with a single decimal place.
So, what does this temporary extension now produce?
3.8
Lovely!
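The fettling, in isolation: multiply up by 10, floor, and divide back down, which truncates the mean to one decimal place. Running the Winter Warmer numbers through it:

```shell
# (add / length) is the mean; the *10 | floor / 10 dance keeps one decimal
echo '[4,4,4,3.5,4,4.25,3.25,4.25,3.75,3.4]' \
  | jq 'def average: (add / length) * 10 | floor / 10; average'
# → 3.8
```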
I can now remove that extension and inject the two functions to the expression I'm sending in the second parameter for the call to arrange
, like this:
def category: split(" -") | first;
def arrange(k;v):
group_by(.[k])
| map({key: (first|.[k]), value: v});
def numbers: (map(select(length > 0)|tonumber));
def average: (add / length) * 10 | floor / 10;
map({ category: .beer_type|category, rating_score })
| arrange("category"; map(.rating_score)|numbers|average)
This produces what I was hoping for, a nice list of objects, one per category, with that category's average rating. Here's the first and last couple in that list (for brevity):
[
{
"key": "Altbier",
"value": 3.5
},
{
"key": "Barleywine",
"value": 4.4
},
{
"key": "Belgian Blonde",
"value": 3.7
},
{
"key": "Belgian Dubbel",
"value": 3.9
}
]
The nice thing about this sort of data structure is that it lends itself to further processing. In this case, I want to sort the categories by rating, in descending order.
I can achieve this with a call to sort_by
, and then a call to reverse
to swap the order.
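On a minimal pair of key/value objects, the sort_by and reverse combination looks like this:

```shell
# sort_by sorts ascending on .value; reverse flips to descending
echo '[{"key":"Pilsner","value":2.7},{"key":"Rauchbier","value":5}]' \
  | jq -c 'sort_by(.value) | reverse'
# → [{"key":"Rauchbier","value":5},{"key":"Pilsner","value":2.7}]
```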
While I'm at it, I'll also adopt a common programming approach of putting the main logic control in a main
function and then calling that at the bottom of the script. It reminds me a lot of the Python style:
if __name__ == "__main__":
...
So, here goes:
def category: split(" -") | first;
def arrange(k;v):
group_by(.[k])
| map({key: (first|.[k]), value: v});
def numbers: (map(select(length > 0)|tonumber));
def average: (add / length) * 10 | floor / 10;
def main:
map({ category: .beer_type|category, rating_score })
| arrange("category"; map(.rating_score)|numbers|average)
| sort_by(.value)
| reverse;
main
This produces an array of categories, ordered by their average rating. Here are the first and last two in that list:
[
{
"key": "Rauchbier",
"value": 5
},
{
"key": "Freeze-Distilled Beer",
"value": 5
},
{
"key": "Märzen",
"value": 2.9
},
{
"key": "Pilsner",
"value": 2.7
}
]
That's nice, but I will go one stage further and take advantage of the key/value
pattern, using from_entries
to condense that:
def category: split(" -") | first;
def arrange(k;v):
group_by(.[k])
| map({key: (first|.[k]), value: v});
def numbers: (map(select(length > 0)|tonumber));
def average: (add / length) * 10 | floor / 10;
def main:
map({ category: .beer_type|category, rating_score })
| arrange("category"; map(.rating_score)|numbers|average)
| sort_by(.value)
| reverse
| from_entries;
main
This produces a neat list, like this:
{
"Rauchbier": 5,
"Freeze-Distilled Beer": 5,
"Chilli / Chile Beer": 5,
"Black & Tan": 4.5,
"Belgian Quadrupel": 4.5,
"Barleywine": 4.4,
"Wild Ale": 4.3,
"Specialty Grain": 4.3,
"Old Ale": 4.3,
"Bière de Champagne / Bière Brut": 4.3,
"Strong Ale": 4.2,
"Sour": 4.2,
"Stout": 4.1,
"Rye Wine": 4.1,
"IPA": 4.1,
"Belgian Tripel": 4.1,
"Winter Ale": 4,
"Smoked Beer": 4,
"Scotch Ale / Wee Heavy": 4,
"Red Ale": 4,
"Lambic": 4,
"Historical Beer": 4,
"Grape Ale": 4,
"Brown Ale": 4,
"Brett Beer": 4,
"Belgian Strong Dark Ale": 4,
"Traditional Ale": 3.9,
"Rye Beer": 3.9,
"Porter": 3.9,
"Pale Ale": 3.9,
"Mild": 3.9,
"Farmhouse Ale": 3.9,
"California Common": 3.9,
"Belgian Dubbel": 3.9,
"Winter Warmer": 3.8,
"Spiced / Herbed Beer": 3.8,
"Schwarzbier": 3.8,
"Belgian Strong Golden Ale": 3.8,
"Gluten-Free": 3.7,
"Fruit Beer": 3.7,
"Bock": 3.7,
"Bitter": 3.7,
"Belgian Blonde": 3.7,
"Scottish Export Ale": 3.6,
"Roggenbier": 3.6,
"Dark Ale": 3.6,
"Wheat Beer": 3.5,
"Table Beer": 3.5,
"Mead": 3.5,
"Kellerbier / Zwickelbier": 3.5,
"Honey Beer": 3.5,
"Cream Ale": 3.5,
"Cider": 3.5,
"Altbier": 3.5,
"Blonde Ale": 3.4,
"Scottish Ale": 3.3,
"Kölsch": 3.3,
"Golden Ale": 3.3,
"Lager": 3.1,
"Shandy / Radler": 3,
"Märzen": 2.9,
"Pilsner": 2.7
}
That's very satisfying!
Well I think I'm there, basically. But something bothers me. I know my favourite style is more towards the India Pale Ale (IPA) variety, but ranking well above that style (both IPAs and Imperial IPAs) are some rarer categories, such as Rauchbier and Freeze-Distilled Beer. Why is that? That's what I'll investigate in part 4.
Part 1 finished with a count and list of categories of beer (IPA, Bock, Belgian Tripel, etc), produced from some jq
in untappd.jq
that looks like this:
def category: split(" -") | first;
map(.beer_type|category) | unique | length, .
The output looks like this (reduced here):
62
[
"Altbier",
"Barleywine",
"Belgian Blonde",
"...",
"Winter Ale",
"Winter Warmer"
]
So now it's time to pick out the data I need for the analysis, and that is, for each checkin, the beer's category, and my rating. I'll start by just mapping the array of checkin objects to an array of smaller objects just containing these two things:
def category: split(" -") | first;
map({ category: .beer_type|category, rating_score })
When using the object construction mechanism, I can just specify the name of an existing property, in this case
rating_score
, which is shorthand for "rating_score": .rating_score
.
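That shorthand in isolation, with a made-up checkin object:

```shell
# {rating_score} picks just that property from the input object
echo '{"beer_name":"X","rating_score":"4.5"}' | jq -c '{rating_score}'
# → {"rating_score":"4.5"}
```

Back to the full map expression.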
This produces an array of pairs of values which parallel the simple chronological list of checkins (output reduced for brevity):
[
{
"category": "Brown Ale",
"rating_score": "5"
},
{
"category": "Pale Ale",
"rating_score": "5"
},
{
"category": "Bitter",
"rating_score": "3"
},
{
"category": "Bitter",
"rating_score": ""
},
{
"category": "Belgian Tripel",
"rating_score": "4.7"
},
{
"category": "...",
"rating_score": "..."
}
]
Notice the checkin to a Bitter where I had not specified a rating. While we're at it, notice that the ratings are all strings, even though the values are numeric. We'll deal with those two aspects, but not just yet.
In order to be able to have a chance of calculating the average rating per category, I need first to group the data by category. So that's next:
def category: split(" -") | first;
map({ category: .beer_type|category, rating_score })
| group_by(.category)
Here's what that produces (again, massively reduced for brevity):
[
[
{
"category": "Altbier",
"rating_score": "4"
},
{
"category": "Altbier",
"rating_score": "3"
},
{
"category": "Altbier",
"rating_score": "3.75"
}
],
[
{
"category": "Barleywine",
"rating_score": "4"
}
],
[
{
"category": "Belgian Quadrupel",
"rating_score": "4.9"
},
{
"category": "Belgian Quadrupel",
"rating_score": "5"
}
]
]
This seems familiar. In the "Arranging by brewery country and count" section of Untappd data with jq - my top brewery countries I had a similar requirement, and following the call to group_by
I mapped over each subarray creating small objects consisting of a key
property having the value of the subarray's first entry's brewery_country
and a value
property having the length of the subarray. This is the code I had:
< checkins.json jq '
.[-20:]
| map({beer_name, brewery_name, brewery_country})
| group_by(.brewery_country)
| map({key: first.brewery_country, value: length})
'
I'm at a similar position here now too. I have a number of subarrays, each one representing a beer category, and containing one object per checkin. I want to turn those subarrays into something that makes more sense from an average rating per category point of view. And to get there I would need something very similar to this group_by ... map
approach. Let's have a look:
def category: split(" -") | first;
map({ category: .beer_type|category, rating_score })
| group_by(.category)
| map({key: first.category, value: map(.rating_score)})
This creates the following type of output:
[
{
"key": "Altbier",
"value": [
"4",
"3",
"3.75",
"3.5",
"3.25"
]
},
{
"key": "...",
"value": [
"...",
"..."
]
},
{
"key": "Winter Warmer",
"value": [
"",
"",
"4",
"4",
"4",
"3.5",
"4",
"4.25",
"3.25",
"4.25",
"3.75",
"3.4"
]
}
]
OK, getting there! But before we move on it feels right to encapsulate this pattern into a function. I'll do that now:
def category: split(" -") | first;
def arrange(k;v):
group_by(.[k])
| map({key: (first|.[k]), value: v});
map({ category: .beer_type|category, rating_score })
| arrange("category"; map(.rating_score))
This new function arrange (naming things is hard) performs the group_by ... map combination. It takes two parameters (in jq parlance it would be written as arrange/2):

k is what the grouping property should be
v is what the value of the value property should be in the resulting objects

To use an indirect value (whatever is in k) like this in a property reference, we have to use the syntax .[k] rather than .k, of course.

So in the call to arrange, the first parameter I'm passing is the string "category", which is the name of the property by which I want the objects to be grouped, and is also the name of the property I use to get the value for the key (first|.[k]) in each object produced in the call to map.

And the second parameter I'm passing is the expression map(.rating_score), which when evaluated produces an array of values from the rating_score property in each checkin.
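As an aside, one way to sanity-check the arrange/2 logic outside jq is to mirror it in Python. This is purely an illustrative sketch of my own (the function and data names here are not part of the jq code above); note that jq's group_by sorts by the grouping expression first, which is why the Python version pre-sorts before grouping:

```python
from itertools import groupby

def arrange(rows, k, value_fn):
    """Mirror of the jq arrange/2 function: group objects by the
    property named k, then build one {key, value} object per group."""
    # jq's group_by sorts by the grouping expression before grouping;
    # itertools.groupby needs the same pre-sort to form correct groups
    rows = sorted(rows, key=lambda r: r[k])
    return [
        {"key": key, "value": value_fn(list(group))}
        for key, group in groupby(rows, key=lambda r: r[k])
    ]

# A tiny sample of checkins, with beer_type already reduced to its category
checkins = [
    {"category": "Altbier", "rating_score": "4"},
    {"category": "Altbier", "rating_score": "3"},
    {"category": "Barleywine", "rating_score": "4"},
]

# Equivalent of: arrange("category"; map(.rating_score))
result = arrange(checkins, "category", lambda grp: [r["rating_score"] for r in grp])
print(result)
# [{'key': 'Altbier', 'value': ['4', '3']}, {'key': 'Barleywine', 'value': ['4']}]
```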
Well, that seems like a good place to end this part. In part 3 I'll deal with those pesky null rating values, and also with the fact that all the ratings are strings rather than numbers. And then calculate an average.
I'm not a great fan of slides, but am not against them either. I use them sometimes, and on the occasions when I do, each slide will be simple, perhaps with a picture or diagram, or with a few key words.
Sometimes, only when absolutely necessary, some slides will have more detail on them.
A slide deck is not the talk content. A slide deck is there to aid the talk, to enhance it, to provide a bit of context (or light relief) for those attending. They're there to support what's being said, to underpin the message.
That's why, sometimes, I don't use slides at all. I just show stuff on my computer, fumble around and wave my arms about wildly. Anything to get the point across, to help explain what I'm trying to say, to be more effective in landing the concepts that I'm attempting to convey.
I'm often modifying (I was going to say "improving", but that is up for debate) the content of my talk right up to the day, even the hour, when I'm going to give it. It's all about being as up to date as possible, and maintaining the balance between spontaneity and the solid core of a story. Naturally, I'll adjust any supporting slides as I make such modifications.
So asking for slides in advance is entirely inappropriate. It feels like being asked to submit a speech in written form, verbatim and immutable.
It's an anti-pattern. In these days of, you know, the Internet and the Web, it's not even necessary. We all have the wherewithal to host content and point to it. The technologies required have existed since the early 1990s, at least. In other words, stop trying to gather slide decks as if they still existed on transparent foils that were presented on overhead projectors and then photocopied and distributed via mail after the event.
Picture courtesy of Wikimedia Commons
This anti-pattern reminds me of another pre-Web process still extant in today's age of the fundamental interconnectedness of all things. I wrote about this, the depressing sight of the requirement to upload one's CV (rƩsumƩ) to a server on LinkedIn.
To make matters worse, the only filetypes allowed are Word and PDF. Seriously? (See Monday morning thoughts: rethinking like the web for more details on this).
Anyway, this is 2022. I wish event organisers would notice that and stop asking for slides in advance. Why do they do it? I suspect it's because it's just how they've always done it, and have not been told otherwise, and haven't really thought about what a poor process it is with respect to their speakers. So perhaps this blog post will help.
And in the same way that the Word and PDF filetype restriction makes a bad situation even worse in the LinkedIn CV upload anti-pattern, asking your speakers to use a specific PowerPoint template makes a big assumption and also makes a bad situation worse. What if your speaker doesn't have or use PowerPoint? Do you have a template for another slides tool? What about Apple's Keynote? Google Slides? And I know this is niche, but what about terminal-based slide presentation software? It's what I use these days. Are you going to provide your branding on templates for all these tools?
So, dear event organisers, I exhort you. Please stop asking for slides in advance, treat your speakers like grown-ups and respect their content creation process. Thank you.
Near the start of the previous post Untappd data with jq - my top brewery countries there's an example of a checkin object; here are some of the properties:
{
"beer_name": "Leffe Brune / Bruin",
"brewery_name": "Abbaye de Leffe",
"beer_type": "Brown Ale - Belgian",
"rating_score": "5"
}
I wanted to know how (if at all) my rating was affected by my particular preferences on beer types. I started looking at what types existed in my checkin data. First, how many are we talking about here?
< checkins.json jq '
map(.beer_type) | unique | length
'
Wow, there are quite a few:
177
Let's have a look at the first 20:
< checkins.json jq '
map(.beer_type) | unique[:20]
'
The unique function produces a sorted list as well as removing duplicates.
[
"Altbier",
"Barleywine - American",
"Barleywine - English",
"Barleywine - Other",
"Belgian Blonde",
"Belgian Dubbel",
"Belgian Quadrupel",
"Belgian Strong Dark Ale",
"Belgian Strong Golden Ale",
"Belgian Tripel",
"Bitter - Best",
"Bitter - Extra Special / Strong (ESB)",
"Bitter - Session / Ordinary",
"BiĆØre de Champagne / BiĆØre Brut",
"Black & Tan",
"Blonde Ale",
"Bock - Doppelbock",
"Bock - Eisbock",
"Bock - Hell / Maibock / Lentebock",
"Bock - Single / Traditional"
]
OK so there are quite a few.
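Incidentally, jq's unique does sorting and de-duplication in one go. For anyone more at home in Python, the same behaviour can be sketched like this (an illustrative aside of my own, not from the jq above):

```python
# jq's unique = sort + de-duplicate; sorted(set(...)) does the same
beer_types = ["IPA - American", "Altbier", "IPA - American", "Barleywine - English"]

unique_types = sorted(set(beer_types))
print(unique_types)       # ['Altbier', 'Barleywine - English', 'IPA - American']
print(len(unique_types))  # 3
```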
I might be able to be a little less granular if I just take whatever comes before the dash, if there is one. That would, for example, group together all the Bock types, and, for another example, all the IPAs, of which there are quite a few:
< checkins.json jq '
map(.beer_type|select(startswith("IPA -"))) | unique
'
As you can see:
[
"IPA - American",
"IPA - Belgian",
"IPA - Black / Cascadian Dark Ale",
"IPA - Brett",
"IPA - Brut",
"IPA - Cold",
"IPA - English",
"IPA - Farmhouse",
"IPA - Imperial / Double",
"IPA - Imperial / Double Black",
"IPA - Imperial / Double Milkshake",
"IPA - Imperial / Double New England / Hazy",
"IPA - Milkshake",
"IPA - New England / Hazy",
"IPA - New Zealand",
"IPA - Other",
"IPA - Red",
"IPA - Rye",
"IPA - Session",
"IPA - Sour",
"IPA - Triple",
"IPA - Triple New England / Hazy",
"IPA - White / Wheat"
]
So if I call the part before any dash the "major" type, how many of those are there? Hopefully fewer than 177. Let's work it out:
< checkins.json jq '
map(.beer_type|split(" -")|first) | unique | length, .
'
Hold on though, what if a beer_type value doesn't have a dash? What will calling split(" -") do here? Let's see:

jq -n '
["Major - minor", "Some other type"] | map(split(" -")|first)
'

(The -n option tells jq to use null as the single input value, effectively telling jq not to expect any JSON to be fed in.)

This gives:

[
"Major",
"Some other type"
]

This is what we want to happen.
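The same no-separator behaviour holds for string splitting in most languages; for instance, a quick Python cross-check (illustrative only, not part of the jq filter being built here):

```python
# Splitting on " -": if the separator is absent, the whole string
# comes back as the single (first) element - matching jq's split/1
types = ["Major - minor", "Some other type"]

majors = [t.split(" -")[0] for t in types]
print(majors)  # ['Major', 'Some other type']
```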
OK, let's run the filter, which gives:
62
[
"Altbier",
"Barleywine",
"Belgian Blonde",
"Belgian Dubbel",
"Belgian Quadrupel",
"Belgian Strong Dark Ale",
"Belgian Strong Golden Ale",
"Belgian Tripel",
"Bitter",
"BiĆØre de Champagne / BiĆØre Brut",
"Black & Tan",
"Blonde Ale",
"Bock",
"Brett Beer",
"Brown Ale",
"California Common",
"Chilli / Chile Beer",
"Cider",
"Cream Ale",
"Dark Ale",
"Farmhouse Ale",
"Freeze-Distilled Beer",
"Fruit Beer",
"Gluten-Free",
"Golden Ale",
"Grape Ale",
"Historical Beer",
"Honey Beer",
"IPA",
"Kellerbier / Zwickelbier",
"Kƶlsch",
"Lager",
"Lambic",
"Mead",
"Mild",
"MƤrzen",
"Old Ale",
"Pale Ale",
"Pilsner",
"Porter",
"Rauchbier",
"Red Ale",
"Roggenbier",
"Rye Beer",
"Rye Wine",
"Schwarzbier",
"Scotch Ale / Wee Heavy",
"Scottish Ale",
"Scottish Export Ale",
"Shandy / Radler",
"Smoked Beer",
"Sour",
"Specialty Grain",
"Spiced / Herbed Beer",
"Stout",
"Strong Ale",
"Table Beer",
"Traditional Ale",
"Wheat Beer",
"Wild Ale",
"Winter Ale",
"Winter Warmer"
]
Note there are two JSON values - a scalar (62) and an array. I wanted the count, as well as all the names of the major types, and that's what was produced, as you can see, from length and . respectively. The interesting thing to note is that in the last part of the filter, both length and . were passed the output from the preceding expression (the output from unique); there was no need for any variable binding or explicit value passing.
That "major type" thing is something I'll likely use again, so it's worth considering creating a function for it.
From now on, I'll stop showing the entire command line invocation (passing the file contents to jq, and specifying the filter in single quotes, also on the command line) and show just the jq expressions instead. That's mostly because I can then get it formatted a little nicer in these posts (with a bit of colour that's sensitive to jq's syntax). I'll create a file untappd.jq to hold the jq expressions, so you just need to imagine that the invocations now look like this:

< checkins.json jq -f untappd.jq
The function is very simple and just encapsulates what we've done already, which is then replaced with a call to that function:
def category: split(" -") | first;
map(.beer_type|category) | unique | length, .
I wasn't fond of the name "major_type" for the function, so I've come up with the name "category" instead.
That's the end of part 1. It looks like I have a manageable set of major beer types (categories) to use as a basis for this analysis. I've also got the feeling that the jq that I'll end up writing might be more than a few lines' worth, so I'm glad I've made the switch away from a "one-liner" to a file-based filter.
In part 2 I look at collecting my ratings across all the checkins, ready for averaging them by category.
jq, because it's a nice data set to practise my limited filtering fu upon, and also to get my blogging flowing again.
I'm an Untappd supporter and an early adopter, joining in early November 2010, 12 years ago. Recently Untappd celebrated 12 years of operation and 10 million users. It got me thinking back to my very first checkin (it was a Leffe Brune, in case you're wondering), and then I remembered that as an Untappd supporter I could get access to my entire checkin history, in JSON.
The JSON data is quite simple - it's a single file (I've called it checkins.json) containing an array of checkin objects, where each object looks like this:
{
"beer_name": "Leffe Brune / Bruin",
"brewery_name": "Abbaye de Leffe",
"beer_type": "Brown Ale - Belgian",
"beer_abv": "6.5",
"beer_ibu": "20",
"comment": "Christening Untappd with this, a fav of mine.",
"venue_name": null,
"venue_city": null,
"venue_state": null,
"venue_country": null,
"venue_lat": null,
"venue_lng": null,
"rating_score": "5",
"created_at": "2010-11-08 18:52:02",
"checkin_url": "https://untappd.com/c/11215",
"beer_url": "https://untappd.com/beer/5941",
"brewery_url": "https://untappd.com/brewery/5",
"brewery_country": "Belgium",
"brewery_city": "Leuven",
"brewery_state": "Vlaanderen",
"flavor_profiles": "",
"purchase_venue": "",
"serving_type": "",
"checkin_id": "11215",
"bid": "5941",
"brewery_id": "5",
"photo_url": null,
"global_rating_score": 3.55,
"global_weighted_rating_score": 3.55,
"tagged_friends": "",
"total_toasts": "1",
"total_comments": "0"
}
(OK, I put my rating_score of 5 down to excitement at a new beer rating app.)
Noting that my first checkin was to a beer from Belgium (see the value for the brewery_country property), I thought it would be a nice exercise to discover the top brewery countries for the beers I've checked in.
To keep the data compact for this blog post, I decided to analyse just the latest 20 checkins, rather than the entire four thousand plus. And for the purposes of experimentation and illustration, I only really need to see the beer name, brewery name and brewery country.
So I start my analysis like this:
< checkins.json jq '
.[-20:]
| map({beer_name, brewery_name, brewery_country})
'
Note the use of a negative index on the array slice here, which causes the slice to count backwards from the end of the array. Note also that I'm invoking jq and passing in the data in a slightly different way than I have done before (such as in Summing and grouping values with jq). Instead of specifying a filename (jq filter filename) I'm using redirection to pass the contents of the filename to jq's STDIN: < filename jq filter
This gives us a much smaller data set to think about, but which has enough variation to have the analysis also make sense:
[
{
"beer_name": "Kentucky Breakfast Stout (KBS)",
"brewery_name": "Founders Brewing Co.",
"brewery_country": "United States"
},
{
"beer_name": "Gueuze Tilquin ā Draft Version",
"brewery_name": "Gueuzerie Tilquin",
"brewery_country": "Belgium"
},
{
"beer_name": "Zwanze 2022 - Poivre De Gorilles",
"brewery_name": "Brasserie Cantillon",
"brewery_country": "Belgium"
},
{
"beer_name": "Moeder Imperiale",
"brewery_name": "La Source Beer Co.",
"brewery_country": "Belgium"
},
{
"beer_name": "Supersonic",
"brewery_name": "LERVIG",
"brewery_country": "Norway"
},
{
"beer_name": "Illuminati",
"brewery_name": "Leelanau Brewing Company",
"brewery_country": "United States"
},
{
"beer_name": "Out of Vogue",
"brewery_name": "Burning Sky Brewery",
"brewery_country": "England"
},
{
"beer_name": "SDIPA Strata",
"brewery_name": "Vault City Brewing",
"brewery_country": "Scotland"
},
{
"beer_name": "Petrus Dubbel",
"brewery_name": "Brouwerij De Brabandere",
"brewery_country": "Belgium"
},
{
"beer_name": "Outlaw",
"brewery_name": "Distant Hills",
"brewery_country": "England"
},
{
"beer_name": "North X Neon Raptor Imperial Stout + Cacao + Peanut + Banana",
"brewery_name": "North Brewing Co.",
"brewery_country": "England"
},
{
"beer_name": "Sweet Temptation",
"brewery_name": "Vocation Brewery",
"brewery_country": "England"
},
{
"beer_name": "Turns",
"brewery_name": "Siren Craft Brew",
"brewery_country": "England"
},
{
"beer_name": "Abt 12",
"brewery_name": "Brouwerij St.Bernardus",
"brewery_country": "Belgium"
},
{
"beer_name": "Interference Is Temporary",
"brewery_name": "Cloudwater Brew Co.",
"brewery_country": "England"
},
{
"beer_name": "Liquid Art",
"brewery_name": "Prizm Brewing Co.",
"brewery_country": "France"
},
{
"beer_name": "Have You Got Cask Or Is It All Craft?",
"brewery_name": "DEYA Brewing Company",
"brewery_country": "England"
},
{
"beer_name": "DIVINE FAITH // DIPA (2022)",
"brewery_name": "Northern Monk",
"brewery_country": "England"
},
{
"beer_name": "Silver King",
"brewery_name": "Ossett Brewery",
"brewery_country": "England"
},
{
"beer_name": "HEATHEN // HAZY IPA",
"brewery_name": "Northern Monk",
"brewery_country": "England"
}
]
Well, the first thing I want to do is arrange the checkin objects by brewery country, using group_by:
< checkins.json jq '
.[-20:]
| map({beer_name, brewery_name, brewery_country})
| group_by(.brewery_country)
'
This results in a set of subarrays, one for each brewery country, as we'd expect. Note the new [ [ ... ], [ ... ], ... ] structure:
[
[
{
"beer_name": "Gueuze Tilquin ā Draft Version",
"brewery_name": "Gueuzerie Tilquin",
"brewery_country": "Belgium"
},
{
"beer_name": "Zwanze 2022 - Poivre De Gorilles",
"brewery_name": "Brasserie Cantillon",
"brewery_country": "Belgium"
},
{
"beer_name": "Moeder Imperiale",
"brewery_name": "La Source Beer Co.",
"brewery_country": "Belgium"
},
{
"beer_name": "Petrus Dubbel",
"brewery_name": "Brouwerij De Brabandere",
"brewery_country": "Belgium"
},
{
"beer_name": "Abt 12",
"brewery_name": "Brouwerij St.Bernardus",
"brewery_country": "Belgium"
}
],
[
{
"beer_name": "Out of Vogue",
"brewery_name": "Burning Sky Brewery",
"brewery_country": "England"
},
{
"beer_name": "Outlaw",
"brewery_name": "Distant Hills",
"brewery_country": "England"
},
{
"beer_name": "North X Neon Raptor Imperial Stout + Cacao + Peanut + Banana",
"brewery_name": "North Brewing Co.",
"brewery_country": "England"
},
{
"beer_name": "Sweet Temptation",
"brewery_name": "Vocation Brewery",
"brewery_country": "England"
},
{
"beer_name": "Turns",
"brewery_name": "Siren Craft Brew",
"brewery_country": "England"
},
{
"beer_name": "Interference Is Temporary",
"brewery_name": "Cloudwater Brew Co.",
"brewery_country": "England"
},
{
"beer_name": "Have You Got Cask Or Is It All Craft?",
"brewery_name": "DEYA Brewing Company",
"brewery_country": "England"
},
{
"beer_name": "DIVINE FAITH // DIPA (2022)",
"brewery_name": "Northern Monk",
"brewery_country": "England"
},
{
"beer_name": "Silver King",
"brewery_name": "Ossett Brewery",
"brewery_country": "England"
},
{
"beer_name": "HEATHEN // HAZY IPA",
"brewery_name": "Northern Monk",
"brewery_country": "England"
}
],
[
{
"beer_name": "Liquid Art",
"brewery_name": "Prizm Brewing Co.",
"brewery_country": "France"
}
],
[
{
"beer_name": "Supersonic",
"brewery_name": "LERVIG",
"brewery_country": "Norway"
}
],
[
{
"beer_name": "SDIPA Strata",
"brewery_name": "Vault City Brewing",
"brewery_country": "Scotland"
}
],
[
{
"beer_name": "Kentucky Breakfast Stout (KBS)",
"brewery_name": "Founders Brewing Co.",
"brewery_country": "United States"
},
{
"beer_name": "Illuminati",
"brewery_name": "Leelanau Brewing Company",
"brewery_country": "United States"
}
]
]
Each of the subarrays has a length equal to the count of checkins for that country, clearly. So I can use this and gather the data into a key/value structure that I can then use further down the line with the entries family of functions.
< checkins.json jq '
.[-20:]
| map({beer_name, brewery_name, brewery_country})
| group_by(.brewery_country)
| map({key: first.brewery_country, value: length})
'
This has the effect of turning the subarrays into objects:
[
{
"key": "Belgium",
"value": 5
},
{
"key": "England",
"value": 10
},
{
"key": "France",
"value": 1
},
{
"key": "Norway",
"value": 1
},
{
"key": "Scotland",
"value": 1
},
{
"key": "United States",
"value": 2
}
]
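As a hypothetical cross-check of this group_by + map step, the same grouping-and-counting can be expressed with Python's itertools.groupby, which - like jq's group_by - relies on sorted input (the sample data here is a tiny stand-in, not the real checkins file):

```python
from itertools import groupby

# A tiny stand-in for checkins.json
checkins = [
    {"beer_name": "Abt 12", "brewery_country": "Belgium"},
    {"beer_name": "Turns", "brewery_country": "England"},
    {"beer_name": "Silver King", "brewery_country": "England"},
]

# group_by(.brewery_country) | map({key: first.brewery_country, value: length})
rows = sorted(checkins, key=lambda c: c["brewery_country"])
entries = [
    {"key": country, "value": len(list(group))}
    for country, group in groupby(rows, key=lambda c: c["brewery_country"])
]
print(entries)
# [{'key': 'Belgium', 'value': 1}, {'key': 'England', 'value': 2}]
```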
I could have mapped the subarrays slightly differently, like this:
< checkins.json jq '
.[-20:]
| map({beer_name, brewery_name, brewery_country})
| group_by(.brewery_country)
| map({(first.brewery_country): length})
'
which would have produced an arguably neater result:
[
{
"Belgium": 5
},
{
"England": 10
},
{
"France": 1
},
{
"Norway": 1
},
{
"Scotland": 1
},
{
"United States": 2
}
]
The problem with this result is that it's now harder to sort by the count, because there's no stable property to refer to for sorting. So we'll stick with the use of key and value properties.
It's now time to sort, and I want the most popular brewery country at the top, so I'll also need to reverse the sorted output:
< checkins.json jq '
.[-20:]
| map({beer_name, brewery_name, brewery_country})
| group_by(.brewery_country)
| map({key: first.brewery_country, value: length})
| sort_by(.value)
| reverse
'
This produces what we're expecting:
[
{
"key": "England",
"value": 10
},
{
"key": "Belgium",
"value": 5
},
{
"key": "United States",
"value": 2
},
{
"key": "Scotland",
"value": 1
},
{
"key": "Norway",
"value": 1
},
{
"key": "France",
"value": 1
}
]
Now I have the core data computed and organised as required, I can neaten it up using the from_entries function, which expects key and value property names:
< checkins.json jq '
.[-20:]
| map({beer_name, brewery_name, brewery_country})
| group_by(.brewery_country)
| map({key: first.brewery_country, value: length})
| sort_by(.value)
| reverse
| from_entries
'
And I get an even better version of what I almost went for when I was first arranging by brewery country and count:
{
"England": 10,
"Belgium": 5,
"United States": 2,
"Scotland": 1,
"Norway": 1,
"France": 1
}
That'll do nicely.
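For what it's worth, from_entries has no Python builtin of the same name, but a dict comprehension over key/value objects does the same job (a sketch of my own, just to illustrate the shape of the transformation):

```python
# from_entries: [{key, value}, ...] -> {key: value, ...}
entries = [
    {"key": "England", "value": 10},
    {"key": "Belgium", "value": 5},
]

result = {e["key"]: e["value"] for e in entries}
print(result)  # {'England': 10, 'Belgium': 5}
```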
Now I'm happy with the result, I can remove the first two parts of the filter (which were there just for a quick experiment) so that the results reflect my entire checkin history:
< checkins.json jq '
group_by(.brewery_country)
| map({key: first.brewery_country, value: length})
| sort_by(.value)
| reverse
| from_entries
'
This gives me the following result:
{
"England": 2518,
"United States": 590,
"Belgium": 497,
"Scotland": 157,
"Netherlands": 123,
"Denmark": 97,
"Germany": 79,
"Wales": 70,
"Spain": 69,
"Norway": 40,
"Ireland": 29,
"Sweden": 25,
"Italy": 20,
"France": 19,
"Australia": 17,
"Estonia": 16,
"Poland": 13,
"New Zealand": 13,
"Latvia": 10,
"Japan": 10,
"United Kingdom": 6,
"Northern Ireland": 5,
"India": 5,
"Austria": 5,
"Iceland": 3,
"Greece": 3,
"Croatia": 3,
"Canada": 3,
"Turkey": 2,
"Switzerland": 2,
"South Africa": 2,
"Portugal": 2,
"Lithuania": 2,
"Hungary": 2,
"Channel Islands": 2,
"Romania": 1,
"Malta": 1,
"Hong Kong": 1,
"Finland": 1,
"Czech Republic": 1
}
Given my beer tastes and my location, I don't think that's a surprising result. But nice to have it confirmed. Cheers and happy 12th birthday Untappd! š»
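Purely as an illustrative aside (my own comparison, not part of the jq above): the whole count / sort / objectify pipeline collapses into a few lines of Python with collections.Counter, whose most_common method plays the roles of sort_by(.value) | reverse:

```python
from collections import Counter

# A tiny stand-in for checkins.json
checkins = [
    {"beer_name": "Abt 12", "brewery_country": "Belgium"},
    {"beer_name": "Turns", "brewery_country": "England"},
    {"beer_name": "Silver King", "brewery_country": "England"},
]

# Counter does the group_by + length; most_common does the
# sort_by(.value) | reverse; dict() does the from_entries
counts = dict(Counter(c["brewery_country"] for c in checkins).most_common())
print(counts)  # {'England': 2, 'Belgium': 1}
```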
JOIN and INDEX, based on an answer to a question that I came across on Stack Overflow.

The answer was in response to a question (JQ: How to join arrays by key?) about how to merge two arrays of related information. I found it interesting, and it also introduced me to a couple of operators in jq that I'd hitherto not come across. There's a section in the manual titled SQL-Style Operators that describes them.

I could have sworn I'd never seen this section before, so had instead looked to see if they were defined in the builtin.jq file, where jq functions, filters and operators are defined ... in jq. I did come across them there, and their definitions helped me understand them too. I thought I'd explore them in this blog post, "out loud", as it were.
Throughout this post I'm going to use the data described in the Stack Overflow question, which (after a bit of tidying up) looks like this (and which I've put into a file called data.json):
{
"weights": [
{
"name": "apple",
"weight": 200
},
{
"name": "tomato",
"weight": 100
}
],
"categories": [
{
"name": "apple",
"category": "fruit"
},
{
"name": "tomato",
"category": "vegetable"
}
]
}
I want to start by staring at the definitions of the two operators in builtin.jq. Here's the section of code, with a few empty lines added for readability:
def INDEX(stream; idx_expr):
reduce stream as $row ({}; .[$row|idx_expr|tostring] = $row);
def INDEX(idx_expr): INDEX(.[]; idx_expr);
def JOIN($idx; idx_expr):
[.[] | [., $idx[idx_expr]]];
def JOIN($idx; stream; idx_expr):
stream | [., $idx[idx_expr]];
def JOIN($idx; stream; idx_expr; join_expr):
stream | [., $idx[idx_expr]] | join_expr;
The first thing I see is that there are multiple definitions of both INDEX and JOIN, each with a different number of parameters. In various discussions, I've seen this reflected in the way folks refer to them. For example, there are three definitions of JOIN: one with two parameters (JOIN($idx; idx_expr)), one with three (JOIN($idx; stream; idx_expr)) and one with four (JOIN($idx; stream; idx_expr; join_expr)). These are referred to, respectively, as JOIN/2, JOIN/3 and JOIN/4, where the number represents the arity.

So I set off on my exploration, looking at the two definitions of INDEX.
Starting with INDEX/2, I see:
def INDEX(stream; idx_expr):
reduce stream as $row ({}; .[$row|idx_expr|tostring] = $row);
Earlier this year I managed to get to grips with the reduce function in jq, and wrote about it in this post: Understanding jq's reduce function. With that understanding, the call to reduce here doesn't seem as impenetrable. Here's how I understand it, in pseudo-JS:
stream.reduce((accumulator, row) => {
accumulator[<result of determining the idx_expr>] = row
return accumulator
}, {})
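That pseudo-JS shape also translates directly into a runnable Python sketch with functools.reduce - an illustration of my own, using the same test data, not the jq source itself:

```python
from functools import reduce

# The stream: the objects from .categories[]
stream = [
    {"name": "apple", "category": "fruit"},
    {"name": "tomato", "category": "vegetable"},
]

# reduce stream as $row ({}; .[$row|idx_expr|tostring] = $row)
# with idx_expr being .name
def step(acc, row):
    acc[str(row["name"])] = row  # tostring applied to the key
    return acc

index = reduce(step, stream, {})
print(list(index))  # ['apple', 'tomato']
```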
In other words, this reduce invocation iterates through the elements of stream, and for each one, represented by $row each time, adds a new entry to an object which is empty to start with ({}); the value of the entry is the element itself ($row) and the key is determined by applying the idx_expr expression to the row and then stringifying the result (tostring).
What happens if I invoke such an INDEX
operator on the test data above? How about:
jq 'INDEX(.categories[]; .name)' data.json
First, .categories[] is used as the "stream", i.e. a stream of values - in this case the values are objects, each with name and category keys, like this:
{
"name": "apple",
"category": "fruit"
},
{
"name": "tomato",
"category": "vegetable"
}
In this invocation of INDEX, .name is set as the idx_expr (the expression for determining what the index, or key, is going to be). So for the first element (the object with the "apple" details), the idx_expr of .name gives "apple".

The result of invoking INDEX(.categories[]; .name), then, is this:
{
"apple": {
"name": "apple",
"category": "fruit"
},
"tomato": {
"name": "tomato",
"category": "vegetable"
}
}
So this has turned a stream of objects into a single object with keys ("indices", I guess) built from values in the original objects.
And INDEX/1 is just a call to INDEX/2 with the first parameter set to .[]:
def INDEX(idx_expr): INDEX(.[]; idx_expr);
This feels like a nice convenience redefinition, and I get the feeling that this version might see more use in a pipeline context. Looking at how it might be used, with the same data, I get this:
jq '.categories | INDEX(.name)' data.json
This produces the same thing:
{
"apple": {
"name": "apple",
"category": "fruit"
},
"tomato": {
"name": "tomato",
"category": "vegetable"
}
}
Implicitly, what's piped into INDEX/1 has to be an array, I guess, which is why the left-hand side of the pipe here is .categories rather than .categories[], which would have caused multiple invocations of INDEX, one for every array element. Moreover, .categories is more appropriate because the first thing that INDEX/1 does is invoke the array iterator on it (i.e. the .[] in INDEX(.[]; idx_expr)).

So in summary, INDEX can be used to create a "lookup" object where the keys are determined based on what you specify to pick out of the incoming stream.
Now I can turn my attention to each of the definitions of JOIN, and there are three: JOIN/2, JOIN/3 and JOIN/4:
def JOIN($idx; idx_expr):
[.[] | [., $idx[idx_expr]]];
def JOIN($idx; stream; idx_expr):
stream | [., $idx[idx_expr]];
def JOIN($idx; stream; idx_expr; join_expr):
stream | [., $idx[idx_expr]] | join_expr;
I'll start by examining JOIN/4.

As the arity identification suggests, this version takes four parameters. Even though the definition is just above, it's worth repeating it here, to be able to stare at it for a minute or two:
def JOIN($idx; stream; idx_expr; join_expr):
stream | [., $idx[idx_expr]] | join_expr;
It's described in the manual thus (emphasis mine, to refer to the parameters):
This builtin joins the values from the given stream to the given index. The index's keys are computed by applying the given index expression to each value from the given stream. An array of the value in the stream and the corresponding value from the index is fed to the given join expression to produce each result.
Here's an example call that I'll run shortly:
INDEX(.categories[]; .name) as $categories
| JOIN($categories; .weights[]; .name; add)
Examining each of the parameters in turn, in the context of this description, we have the following values provided for the following parameters:
$idx <-- $categories ("the given index")

An example of such an index is what's produced by the INDEX builtin we looked at earlier:
{
"apple": {
"name": "apple",
"category": "fruit"
},
"tomato": {
"name": "tomato",
"category": "vegetable"
}
}
In the example call, this is referred to via $categories, which is a symbolic binding to the result of INDEX(.categories[]; .name).

stream <-- .weights[] (the "given stream")

This is a sequence, usually of objects. In the example call, I'm using .weights[] as the stream, i.e. the objects describing foods and their respective weights:
{
"name": "apple",
"weight": 200
},
{
"name": "tomato",
"weight": 100
}
idx_expr <-- .name (the "given index expression")

This is effectively what to use, in the stream, to look up the corresponding data in the index. In this case, .name is appropriate, as it has the values which are used as keys ("apple" and "tomato") in the index.

join_expr <-- add (the "given join expression")

In order to understand why this parameter exists, it's necessary to have in mind what would be produced before such a join. For every stream object, after a successful lookup of a corresponding object in the index (based on the index expression that points to a value in that stream object), what's produced is an array of two objects.

Here's an example, again based on the call I'll make, which is:
INDEX(.categories[]; .name) as $categories
| JOIN($categories; .weights[]; .name; add)
The first object in the .weights[] stream is:
{
"name": "apple",
"weight": 200
}
From this, the value of .name is "apple", and this is used to look for an entry in the $categories index, and one is found - this one:
"apple": {
"name": "apple",
"category": "fruit"
}
Note that what's returned is not that entire structure, but just the value that "apple" points to:
{
"name": "apple",
"category": "fruit"
}
What happens now is described in the last sentence from the manual description:
An array of the value in the stream and the corresponding value from the index is fed to the given join expression to produce each result.
So looking at the first part of that sentence, this is what's produced - an array like this:
[
{
"name": "apple",
"weight": 200
},
{
"name": "apple",
"category": "fruit"
}
]
With the knowledge of what's produced before the join_expr is employed, it is now clearer why such a join expression exists as a parameter.

In this example case it makes most sense to merge the two objects, and the multi-faceted add filter is perfect for this, producing - from that array of two objects - this single object:
{
"name": "apple",
"weight": 200,
"category": "fruit"
}
Joining the pairs of objects in one way or another is very likely to be what one desires.
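Putting INDEX and JOIN/4 together, here's a rough Python rendition of the same lookup-and-merge, with the role of add played by dict merging. This is my own sketch for comparison - the jq internals differ, and the variable names are illustrative:

```python
weights = [
    {"name": "apple", "weight": 200},
    {"name": "tomato", "weight": 100},
]
categories = [
    {"name": "apple", "category": "fruit"},
    {"name": "tomato", "category": "vegetable"},
]

# Like INDEX(.categories[]; .name): key each row by its name
index = {row["name"]: row for row in categories}

# Like JOIN($idx; stream; idx_expr; join_expr) with add as the join
# expression: look up each stream object in the index, then merge the pair
joined = [{**w, **index[w["name"]]} for w in weights]
print(joined[0])  # {'name': 'apple', 'weight': 200, 'category': 'fruit'}
```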
Following on from JOIN/4, it's easier to examine the other arity versions, starting with this one:
def JOIN($idx; stream; idx_expr):
stream | [., $idx[idx_expr]];
From the definition, the difference from JOIN/4 is just that the join_expr parameter is omitted, and there's no pipe into such an expression at the end. The equivalent in a JOIN/4 context would be to specify . as the join_expr. I guess it's nicer not to have to specify that if you don't want any special joining of the pairs of elements in your result.
This is an even more cut-down version, in that not only is there no join_expr but there's also no explicit parameter for specifying the stream. Instead, one is expected to pipe that into such a call to JOIN/2. This is what the definition looks like; one can see the .[] at the start, which unwraps an assumed array to produce a stream that is then piped into the main definition:
def JOIN($idx; idx_expr):
[.[] | [., $idx[idx_expr]]];
Note also in this arity version that the results are returned within an outer array, via the [...] array construction that wraps the entire definition.

Now I've examined the different definitions, it's time to finish off by putting the new knowledge to work, to better understand the answer given, which centres around this jq expression and operates on the same test data I described earlier:
{weights: [JOIN(INDEX(.categories[]; .name); .weights[]; .name; add)]}
With some whitespace, that expression looks like this, which might help us read it more easily and see that there's nothing we don't now know about:
{
weights: [
JOIN(
INDEX( .categories[]; .name );
.weights[];
.name;
add
)
]
}
The expression contains a call to JOIN/4, and the first parameter (the "index") is actually a call to INDEX/2, which, as we know, given the test data, produces this:
{
"apple": {
"name": "apple",
"category": "fruit"
},
"tomato": {
"name": "tomato",
"category": "vegetable"
}
}
Then there's .weights[]
specified for the "stream", from which values for the "index expression" .name
are used to look up data in the index. Finally, add
is specified as what to use as the "join expression". What's produced is a stream of values which are then enclosed in an array ([...]
). This array is then returned as the value of a weights
property inside an object that's constructed just to hold that property.
This entire jq
filter, when applied to the test data, produces the following:
{
"weights": [
{
"name": "apple",
"weight": 200,
"category": "fruit"
},
{
"name": "tomato",
"weight": 100,
"category": "vegetable"
}
]
}
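For comparison, here's a rough Python sketch of the same INDEX-then-JOIN-then-merge mechanics, using the same test data shapes as above (the variable names are just illustrative, of course, not part of jq):

```python
# Illustrative Python equivalent of INDEX(.categories[]; .name)
# followed by JOIN(...; .weights[]; .name; add)
categories = [
    {"name": "apple", "category": "fruit"},
    {"name": "tomato", "category": "vegetable"},
]
weights = [
    {"name": "apple", "weight": 200},
    {"name": "tomato", "weight": 100},
]

# INDEX: build a lookup object keyed by the index expression (.name)
index = {c["name"]: c for c in categories}

# JOIN: pair each stream element with its index entry, then merge the
# pair of objects into one (the dict merge plays the role of add here,
# with the index entry's keys winning, just as the later object wins in jq)
joined = [{**w, **index[w["name"]]} for w in weights]
print({"weights": joined})
```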
Again, I've probably used too many words in my exploration, but perhaps it will help you in your understanding as you explore this area too.
This exploration was inspired by the great answer by Stack Overflow user pmf.
If you're looking for another angle, and another example, there's another great answer from the same user, to a related question Understanding jq JOIN().
In doing some research for an upcoming live stream I was looking at the Northwind OData v4 service and in particular at the Summary_of_Sales_by_Years entity set. It was not what I initially expected; rather than being a summary of sales by year, it was a list of orders, each with a shipping date, order ID and order total. There are over 800 entries, and I grabbed all of them and stored them in a single JSON file Summary_of_Sales_by_Years.json
using slurp, a Bash shell script that auto-follows the @odata.nextLink annotation trail on each chunk of the response.
I wanted to group the list by year and get grand totals for each year. This blog post describes how I went about it, and also describes a sort of preparation stage too where I created an initially much smaller dataset to experiment with.
I've created snippets on jqplay for each of the stages here - you'll see the links at the relevant points in this post.
For the sake of brevity in this post, I cut the data down to just 6 entries, two for each of the years represented (1996, 1997 and 1998). I did this with jq
too, redirecting the output into a new file subset.json
, thus:
jq \
'.value |= (
group_by(.ShippedDate[:4])
| map(.[:2])
| flatten
)' \
Summary_of_Sales_by_Years.json \
> subset.json
This resulted in the following content in subset.json
, which I can now use to more easily illustrate the summing and grouping.
{
"value": [
{
"ShippedDate": "1996-07-16T00:00:00Z",
"OrderID": 10248,
"Subtotal": 440
},
{
"ShippedDate": "1996-07-10T00:00:00Z",
"OrderID": 10249,
"Subtotal": 1863.4
},
{
"ShippedDate": "1997-01-16T00:00:00Z",
"OrderID": 10380,
"Subtotal": 1313.82
},
{
"ShippedDate": "1997-01-01T00:00:00Z",
"OrderID": 10392,
"Subtotal": 1440
},
{
"ShippedDate": "1998-01-02T00:00:00Z",
"OrderID": 10771,
"Subtotal": 344
},
{
"ShippedDate": "1998-01-21T00:00:00Z",
"OrderID": 10777,
"Subtotal": 224
}
]
}
Before we move on, let's briefly examine the jq
used to produce this.
Here's that jq
program again:
.value |= (
group_by(.ShippedDate[:4])
| map(.[:2])
| flatten
)
First, there's this construct: .value |= (...)
. The |=
is the update assignment operator and whatever the filter on the right hand side produces becomes the new value for the value
property. The parentheses in this particular instance ensure that the output from the entire expression within is used. They're needed here because the expression contains pipes (|
) which would otherwise end the right hand side of the update assignment early.
With group_by(.ShippedDate[:4])
the group_by function collects objects by the ShippedDate
property - but not the entire property value, just the first four characters, which represent the year, for example "1996" in "1996-07-16T00:00:00Z" (there's the strptime
function too, which will parse a date into its component parts, but knowledge of the data and laziness won through here). Note that the [:4]
construct (which is short for [0:4]
) is the array/string slice filter operating on a string value in this case, which will return a substring.
The use of group_by
produces an array of arrays, with one subarray for each year.
This is then piped into map(.[:2])
. The [:2]
(again, short for [0:2]
) is the array/string slice filter again, but this time, it's operating on an array rather than a string. I'm using map
to run the filter .[:2]
against each element of the input array, which contains a subarray for each of the years. And the .[:2]
filter, in an array context, will return the first two elements.
The result of this is still an array of arrays, but now each subarray has only two objects each. Now they can all be merged, i.e. taken out of their respective subarrays and collected together. This is done with the flatten filter.
▶ You can see how this jq program reduces the input data to the subset in this jqplay snippet: Initial input data reduction.
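If it helps to see the subsetting logic outside of jq, here's a small Python sketch of the same group-take-two-flatten idea (the orders here are made up, matching only the shape of the real data; note that Python's groupby, unlike jq's group_by, needs its input pre-sorted):

```python
from itertools import groupby

# Illustrative equivalent of group_by(.ShippedDate[:4]) | map(.[:2]) | flatten
orders = [
    {"ShippedDate": "1996-07-16T00:00:00Z", "Subtotal": 440},
    {"ShippedDate": "1996-07-10T00:00:00Z", "Subtotal": 1863.4},
    {"ShippedDate": "1996-08-01T00:00:00Z", "Subtotal": 100},
    {"ShippedDate": "1997-01-16T00:00:00Z", "Subtotal": 1313.82},
]

year = lambda o: o["ShippedDate"][:4]                 # the [:4] string slice
grouped = [list(g) for _, g in
           groupby(sorted(orders, key=year), key=year)]
subset = [o for group in grouped for o in group[:2]]  # map(.[:2]) | flatten
print(len(subset))
```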
So, (now based on the subset of data above), what I actually want is a summary of total order value for each year, something like this:
[
[
"1996",
2303
],
[
"1997",
2753
],
[
"1998",
568
]
]
It makes sense that the approach required will also employ the group_by function as we want total order values for each year, and we can determine the years in the same way as we've seen in the preparation stage, i.e. with the array/string slice filter ([:4]
).
Let's start to explore:
jq \
'.value
| group_by(.ShippedDate[:4])' \
subset.json
[
[
{
"ShippedDate": "1996-07-16T00:00:00Z",
"OrderID": 10248,
"Subtotal": 440
},
{
"ShippedDate": "1996-07-10T00:00:00Z",
"OrderID": 10249,
"Subtotal": 1863.4
}
],
[
{
"ShippedDate": "1997-01-16T00:00:00Z",
"OrderID": 10380,
"Subtotal": 1313.82
},
{
"ShippedDate": "1997-01-01T00:00:00Z",
"OrderID": 10392,
"Subtotal": 1440
}
],
[
{
"ShippedDate": "1998-01-02T00:00:00Z",
"OrderID": 10771,
"Subtotal": 344
},
{
"ShippedDate": "1998-01-21T00:00:00Z",
"OrderID": 10777,
"Subtotal": 224
}
]
]
This is a nice illustration of the array of arrays structure we talked about earlier. There's a subarray for the objects for each year.
Now the data is in the right "shape", it's time to focus on summing the Subtotal
values within each subarray.
jq \
'.value
| group_by(.ShippedDate[:4])
| map(map(.Subtotal))' \
subset.json
This produces the following:
[
[
440,
1863.4
],
[
1313.82,
1440
],
[
344,
224
]
]
Note the nested calls to map
, i.e. map(map(...))
. This is because the outer map
processes the outer array, and passes each element (each of which are also arrays - the by-year subarrays) to the function specified, which is also map
, which processes (in turn) each inner array, which contain the objects. The simple filter .Subtotal
will just return the value of the Subtotal
property, so we see a list of lists of subtotals, remembering that we've got two for each of the three years.
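The nested map(map(...)) pattern corresponds neatly to nested list comprehensions in Python; here's an illustrative sketch with made-up groups:

```python
# The outer comprehension walks the year-groups, the inner one walks
# the objects in each group, picking out just the Subtotal values
groups = [
    [{"Subtotal": 440}, {"Subtotal": 1863.4}],
    [{"Subtotal": 344}, {"Subtotal": 224}],
]
subtotals = [[o["Subtotal"] for o in group] for group in groups]
print(subtotals)
```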
So we have an array of arrays of subtotal values. As a next step let's add these grouped subtotal values together, using add (which is a filter that operates on arrays). While we're at it, we'll use floor to round down to the nearest whole number:
jq \
'.value
| group_by(.ShippedDate[:4])
| map(map(.Subtotal) | add | floor)' \
subset.json
This produces the following:
[
2303,
2753,
568
]
Almost there - but it's not that useful without the year. To get the structure we want, which is an array of arrays each containing the year and total, we'll need to add the year, and enclose that, with the total, in an array:
jq \
'.value
| group_by(.ShippedDate[:4])
| map([
first.ShippedDate[:4],
(map(.Subtotal) | add | floor)
])' \
subset.json
We've expanded what's passed to the outer map
function to the following:
[
first.ShippedDate[:4],
(map(.Subtotal) | add | floor)
]
What's happening here is that we're using array construction ([...]
) to produce an array, with two elements, starting with the value of first.ShippedDate[:4]
.
Each of the expressions in this array construction receives an array (one of the year-specific subarrays), but for our first element we only want the value from one of the elements in the incoming array, so we can use the first function to do that. This is a lovely bit of syntactic sugar, defined alongside its siblings last and nth in builtin.jq:
def first: .[0];
def last: .[-1];
def nth($n): .[$n];
I'm tempted to want to define another function rest
thus:
def rest: .[1:length];
See The beauty of recursion and list machinery for why - in particular, a slight obsession about x:xs
, first and rest, head and tail, and so on.
But I digress.
The second element in the constructed array, i.e. (map(.Subtotal) | add | floor)
, is the same as before, except that it's now surrounded in parentheses to ensure the whole thing is evaluated in one go (specifically, so that it's only the map(.Subtotal)
that gets passed through those pipes to add
and floor
, and not anything else).
So this is where we've ended up:
jq \
'.value
| group_by(.ShippedDate[:4])
| map([
first.ShippedDate[:4],
(map(.Subtotal) | add | floor)
])' \
subset.json
Running this produces the desired result:
[
[
"1996",
2303
],
[
"1997",
2753
],
[
"1998",
568
]
]
Very nice!
▶ You can see how this result is achieved in this jqplay snippet: Producing the 'array' style final result.
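As a cross-check, here's a rough Python rendering of the whole grouping-and-summing pipeline, using the 1996 and 1997 entries from the subset above:

```python
import math
from itertools import groupby

# Rough counterpart of:
#   .value | group_by(.ShippedDate[:4])
#          | map([first.ShippedDate[:4], (map(.Subtotal)|add|floor)])
data = {"value": [
    {"ShippedDate": "1996-07-16T00:00:00Z", "Subtotal": 440},
    {"ShippedDate": "1996-07-10T00:00:00Z", "Subtotal": 1863.4},
    {"ShippedDate": "1997-01-16T00:00:00Z", "Subtotal": 1313.82},
    {"ShippedDate": "1997-01-01T00:00:00Z", "Subtotal": 1440},
]}

year = lambda o: o["ShippedDate"][:4]
totals = [
    [y, math.floor(sum(o["Subtotal"] for o in group))]   # add | floor
    for y, group in groupby(sorted(data["value"], key=year), key=year)
]
print(totals)   # [['1996', 2303], ['1997', 2753]]
```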
As an alternative way of representing the totals by year, and knowing that the year values are stable enough to be property names in objects, we could instead go for something like this:
{
"1996": 2303,
"1997": 2753,
"1998": 568
}
To get this, it's not much of a departure from what we previously ended up with. First, instead of using array construction ([...]
) we can use object construction. As expected, we need to specify the property name and value, in this form:
property: value
Let's make that change, noting that because the expression for the property (the key) is not "identifier-like", i.e. it's an expression to be evaluated, we need to enclose it in parentheses like this: (first.ShippedDate[:4])
. Here we go:
jq \
'.value
| group_by(.ShippedDate[:4])
| map({
(first.ShippedDate[:4]): map(.Subtotal)|add|floor
})' \
subset.json
This produces almost but not quite what we want:
[
{
"1996": 2303
},
{
"1997": 2753
},
{
"1998": 568
}
]
But that's OK, because the more you get a feel for how jq
behaves, the more you'll likely guess that there'll be a simple way to merge these objects. And there is - the versatile add filter. We've used add
already to sum up an array of numeric values (the subtotals) but "adding" an array of objects together merges them.
So let's pipe the output of the map({...})
into add:
jq \
'.value
| group_by(.ShippedDate[:4])
| map({
(first.ShippedDate[:4]): map(.Subtotal)|add|floor
}) | add' \
subset.json
This merges the three year:total pairs from the three objects into a single object, and thus gives us what we want:
{
"1996": 2303,
"1997": 2753,
"1998": 568
}
Lovely!
▶ You can see how this alternative result is achieved in this jqplay snippet: Producing the 'object' style final result.
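The object-merging behaviour of add has a direct Python analogue too: reducing a list of dicts with a dict union. A quick illustrative sketch, reusing the year totals from above:

```python
from functools import reduce

# jq's add, given an array of objects, merges them; reducing a list of
# dicts with a dict union does the same, later keys winning on conflict
pairs = [{"1996": 2303}, {"1997": 2753}, {"1998": 568}]
merged = reduce(lambda a, b: {**a, **b}, pairs, {})
print(merged)
```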
This turned out (again) to be a slightly longer post than expected, but in writing it, and in manipulating the source data, I've learned more about jq
. So that's a result. I hope this helps you too.
For each of our tutorials in SAP's Tutorial Navigator, we have metadata in the frontmatter. Here's an example from the Learn About OData Fundamentals tutorial:
author_name: DJ Adams
author_profile: https://github.com/qmacro
title: Learn about OData Fundamentals
description: Discover OData's origins and learn about the fundamentals of OData by exploring a public OData service.
auto_validation: false
primary_tag: software-product>sap-business-technology-platform
tags: [ software-product>sap-business-technology-platform, topic>cloud, programming-tool>odata, tutorial>beginner ]
time: 15
I received a JSON file with updated valid tags, against which I could check the values for the primary_tag
and tags
properties. The tags were arranged like this (drastically reduced to save space here):
{
"level": [
{
"name": "Beginner",
"value": " tutorial>beginner"
},
{
"name": "Intermediate",
"value": " tutorial>intermediate"
},
{
"name": "Advanced",
"value": " tutorial>advanced"
}
],
"common": [
{
"name": "ABAP Connectivity",
"value": "topic>abap-connectivity"
},
{
"name": "ABAP Development",
"value": "programming-tool>abap-development"
},
{
"name": "ABAP Extensibility",
"value": "programming-tool>abap-extensibility"
},
{
"name": "Android",
"value": "operating-system>android"
},
{
"name": "Artificial Intelligence",
"value": "topic>artificial-intelligence"
},
{
"name": "Big Data",
"value": "topic>big-data"
}
]
}
I wanted to explore the tags by "category", the part before the >
symbol in the value
properties. In the above excerpt (in the common
object, which is where the main list of tags is), there are the following categories: topic
, programming-tool
and operating-system
.
First, I used split to separate out the categories and tags by splitting on the >
symbol in each of the values.
.common
| map(.value | split(">"))
This produces an array of arrays. The outer array is the result of running map
(which takes an array and produces an array) and the inner arrays are the result of running split
on each category>tag
pattern in the value
properties:
[
[
"topic",
"abap-connectivity"
],
[
"programming-tool",
"abap-development"
],
[
"programming-tool",
"abap-extensibility"
],
[
"operating-system",
"android"
],
[
"topic",
"artificial-intelligence"
],
[
"topic",
"big-data"
]
]
The categories are the first values in each of the inner arrays, so next is to group the inner arrays by those categories:
.common
| map(.value | split(">"))
| group_by(.[0])
The .[0]
supplied to group_by
specifies that it's the first element of each inner array that should be the basis of grouping (i.e. the categories topic
, programming-tool
, programming-tool
, etc).
This produces a differently shaped nesting of arrays, one for each of the categories:
[
[
[
"operating-system",
"android"
]
],
[
[
"programming-tool",
"abap-development"
],
[
"programming-tool",
"abap-extensibility"
]
],
[
[
"topic",
"abap-connectivity"
],
[
"topic",
"artificial-intelligence"
],
[
"topic",
"big-data"
]
]
]
Now comes the task of reshaping that essential structure into something a little less "noisy". Using the entries family of functions, this turned out to be quite straightforward. That said, I'll explain the intermediate steps I went through on the way.
As I wanted an object, with the keys being categories, and the values being arrays of tag strings, it felt right to reach for the to_entries
function:
.common
| map(.value | split(">"))
| group_by(.[0])
| to_entries
This produced the following:
[
{
"key": 0,
"value": [
[
"operating-system",
"android"
]
]
},
{
"key": 1,
"value": [
[
"programming-tool",
"abap-development"
],
[
"programming-tool",
"abap-extensibility"
]
]
},
{
"key": 2,
"value": [
[
"topic",
"abap-connectivity"
],
[
"topic",
"artificial-intelligence"
],
[
"topic",
"big-data"
]
]
}
]
That is sort of the direction I want to go, but there's some tidying up to do, to get cleaner values for key
and value
. So I reached for map
to do this:
.common
| map(.value | split(">"))
| group_by(.[0])
| to_entries
| map({key: .value[0][0], value: .value|map(.[1])})
The expression passed to map
is the object construction ({...}
), creating objects each with two properties, key
and value
. The reason for staying with these property names will become clear shortly.
The value for key
is expressed as .value[0][0]
, i.e. the first (zeroth) element of the inner array that is the first (zeroth) element of the array that is the value of the value
property.
In other words, given the last object in the most recent intermediate results above:
{
"key": 2,
"value": [
[
"topic",
"abap-connectivity"
],
[
"topic",
"artificial-intelligence"
],
[
"topic",
"big-data"
]
]
}
Then .value[0][0]
will return "topic"
(specifically, the first instance of that string in the above JSON).
Similarly, to build the value for the new value
property in the object being constructed, I used this expression: .value|map(.[1])
. The current value of the value
property is an array, so using map
on that will produce another array. Of what? Well, of these values: .[1]
.
In other words, the second (index 1) value in each of the sub arrays. Given this same last object example above, .value|map(.[1])
produces ["abap-connectivity", "artificial-intelligence", "big-data"]
.
Running this latest iteration with the map
function produces this:
[
{
"key": "operating-system",
"value": [
"android"
]
},
{
"key": "programming-tool",
"value": [
"abap-development",
"abap-extensibility"
]
},
{
"key": "topic",
"value": [
"abap-connectivity",
"artificial-intelligence",
"big-data"
]
}
]
Almost there!
According to the manual, the to_entries
and from_entries
"convert between an object and array of key-value pairs". In each case, the names of the key and value properties are key
and value
respectively. I had an inkling I would probably want to use from_entries
at some stage, and this is the reason why I kept the names of the properties earlier.
Let's have a look what passing the above structure into from_entries
produces:
.common
| map(.value | split(">"))
| group_by(.[0])
| to_entries
| map({key: .value[0][0], value: .value|map(.[1])})
| from_entries
It's this:
{
"operating-system": [
"android"
],
"programming-tool": [
"abap-development",
"abap-extensibility"
],
"topic": [
"abap-connectivity",
"artificial-intelligence",
"big-data"
]
}
That's very nice, and pretty much exactly what I want. A neat and low-noise representation of the category and tag structure.
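For comparison, here's the whole split-and-group idea as a rough Python sketch, using a small sample of the tags from the excerpt above:

```python
# Split each value on ">" and collect tags under their category
common = [
    {"name": "ABAP Development", "value": "programming-tool>abap-development"},
    {"name": "Big Data", "value": "topic>big-data"},
    {"name": "Android", "value": "operating-system>android"},
    {"name": "Artificial Intelligence", "value": "topic>artificial-intelligence"},
]

by_category = {}
for entry in common:
    category, tag = entry["value"].split(">")      # the split(">") step
    by_category.setdefault(category, []).append(tag)
print(by_category)
```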
It turns out that the pattern:
to_entries -> map(...) -> from_entries
is common enough to have a function expression all of its own, and it's with_entries
. As detailed in the entries section of the manual:
with_entries(foo)
is shorthand forto_entries | map(foo) | from_entries
In fact, we can see how it's defined (which is exactly as described in the manual) in builtin.jq, along with to_entries
and from_entries
.
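In Python terms, the with_entries pattern might be sketched as a little helper like this (entirely hypothetical, just to mirror the shape of to_entries | map(f) | from_entries):

```python
# A hypothetical Python analogue of jq's with_entries: apply a function
# to each (key, value) pair and rebuild the object from the results
def with_entries(f, obj):
    return dict(f(k, v) for k, v in obj.items())

# e.g. upper-case every key
result = with_entries(lambda k, v: (k.upper(), v), {"topic": 3, "level": 2})
print(result)
```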
While I've played around a little with the entries family, this is the first time I've used it for real. Going through the intermediate steps, and finding myself reaching for map
, has actually helped me get a better feel for with_entries.
In the Back to basics: OData - the Open Data Protocol - Part 3 - System query options live stream last Friday we looked at OData's system query options.
There was a question at the end about whether it was possible to use the $filter
system query option at multiple levels, in an $expand
context. I wrote up the question, and a detailed answer (summary: yes) with an example here: Can $filter be applied at multiple levels in an expand?.
I thought this would be another good opportunity to practise a bit of jq
this Saturday late morning, so wondered what a jq
filter would look like, one that would produce the same result as in the answer's example (showing suppliers only from the UK, and only including their products that were low in stock).
The OData URL for this request looks like this:
http://localhost:4004/northwind-model/Suppliers
?$filter=Country eq 'UK'
&$expand=Products($filter=UnitsInStock le 15)
It turned out to be pretty simple. First, I grabbed the basic data:
curl \
'http://localhost:8000/northwind-model/Suppliers?$expand=Products' \
> data.json
Incidentally, here's another example of the power of OData, being able to fetch data from related resources, in the same single request (see Nonsense! Absolute nonsense! for a deliberately provocative take on how some folks are so attracted to shiny new things they ignore what is already there).
Then I loaded it into ijq, the lovely interactive frontend to jq
, and played around a bit.
Here's what I ended up with:
.value
| map(
select(.Country == "UK")
| .Products |= map(
select(.UnitsInStock <= 15)
)
)
Breaking this down, we have:
- .value: gives me the entire array of objects in the dataset, each one of which represents a supplier with all their products
- map(...): this outer map takes the array of supplier and product data and produces a new array, having processed each array element (each supplier with their products) with the filter expression supplied
- select(.Country == "UK"): this is the equivalent of the $filter=Country eq 'UK' in the OData URL
- .Products |= map(...): the result of the previous select (i.e. each supplier that is in the UK) is then passed to this expression, which uses the update assignment (|=) to produce a modified version of the value of the Products property
- select(.UnitsInStock <= 15): the value of the Products property is an array, because the navigation property between the Suppliers and Products entity types is defined as one-to-many. This means it's appropriate to use another select filter to pick out specific elements (those with a value of 15 or less for UnitsInStock). This is the equivalent of the Products($filter=UnitsInStock le 15) part of the OData URL

One thing to note here is that there's a single outer map
, the processing within which not only filters the suppliers, but subsequently filters the products of the (reduced number of) suppliers, in one pass.
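Here's the same one-pass filtering sketched in Python, with a couple of illustrative suppliers (the stock figures are just sample data in the shape of the real thing):

```python
# Keep UK suppliers, and within each, keep only low-stock products,
# all in a single pass over the list
suppliers = [
    {"CompanyName": "Exotic Liquids", "Country": "UK",
     "Products": [{"ProductName": "Chai", "UnitsInStock": 39},
                  {"ProductName": "Aniseed Syrup", "UnitsInStock": 13}]},
    {"CompanyName": "Tokyo Traders", "Country": "Japan",
     "Products": [{"ProductName": "Ikura", "UnitsInStock": 31}]},
]

result = [
    # the inner filter plays the role of .Products |= map(select(...))
    {**s, "Products": [p for p in s["Products"] if p["UnitsInStock"] <= 15]}
    for s in suppliers
    if s["Country"] == "UK"          # the outer select
]
print(result)
```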
Anyway, that's pretty much it for this note-to-self. I think it's time for an early afternoon beer at Browtons. Cheers.
Here's a simple example of using jq.
With the Northwind OData v4 service there are Products and Suppliers. As an easy exercise I want to create a list of products by supplier.
With OData it's easy to follow the Products
navigation property in the Supplier
entity type (see the metadata for more info) to the Product
entity type.
With an OData QUERY operation it's straightforward, using the $expand
and $select
system query options (with extra whitespace for readability):
https://services.odata.org
/v4/northwind/northwind.svc/Suppliers
?$expand=Products($select=ProductName)
&$select=CompanyName
You can try out this OData request directly, and the JSON representation in the response looks something like this (shortened to the first two suppliers for brevity):
{
"@odata.context": "https://services.odata.org/V4/Northwind/Northwind.svc/$metadata#Suppliers(CompanyName,Products,Products(ProductName))",
"value": [
{
"CompanyName": "Exotic Liquids",
"Products": [
{
"ProductName": "Chai"
},
{
"ProductName": "Chang"
},
{
"ProductName": "Aniseed Syrup"
}
]
},
{
"CompanyName": "New Orleans Cajun Delights",
"Products": [
{
"ProductName": "Chef Anton's Cajun Seasoning"
},
{
"ProductName": "Chef Anton's Gumbo Mix"
},
{
"ProductName": "Louisiana Fiery Hot Pepper Sauce"
},
{
"ProductName": "Louisiana Hot Spiced Okra"
}
]
}
]
}
How might we do this in jq
? Let's see. First, let's grab some JSON data. To make it a little more interesting (i.e. without going directly to the supplier grouping) we'll start with the Products
entityset and expand the Supplier
navigation property like this:
https://services.odata.org
/v4/northwind/northwind.svc/Products
?$expand=Supplier
(Note for this simple example, I won't bother trying to consume all of the products by following the @odata.nextLink
s).
This gives a nice structure that we can dig into with jq
. It turns out to be quite simple, especially in the context of the recent process I followed in Exploring JSON with interactive jq.
Being an OData v4 entityset, the data is in the top level value
property, so we start with that, grouping by the ID of each product's supplier:
.value
| group_by(.SupplierID)
Then all we need to do is to reshape the resulting array, via map
, to produce an object for each supplier, with a list of products:
.value
| group_by(.SupplierID)
| map(
{
CompanyName: first.Supplier.CompanyName,
Products: [.[].ProductName]
}
)
Here's a screenshot of this jq
invocation in action, against the OData JSON representation retrieved with the URL above.
You can see the results of the jq
filter, producing what we want (reduced for brevity):
[
{
"CompanyName": "Exotic Liquids",
"Products": [
"Chai",
"Chang",
"Aniseed Syrup"
]
},
{
"CompanyName": "New Orleans Cajun Delights",
"Products": [
"Chef Anton's Cajun Seasoning",
"Chef Anton's Gumbo Mix"
]
},
{
"CompanyName": "Grandma Kelly's Homestead",
"Products": [
"Grandma's Boysenberry Spread",
"Uncle Bob's Organic Dried Pears",
"Northwoods Cranberry Sauce"
]
}
]
You can examine how this works yourself courtesy of jq play.
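And for comparison, here's a rough Python counterpart of the group-and-reshape step, with a few product rows made up to match the shape of the expanded entityset:

```python
from itertools import groupby

# Rough equivalent of .value | group_by(.SupplierID) | map({...})
products = [
    {"ProductName": "Chai", "SupplierID": 1,
     "Supplier": {"CompanyName": "Exotic Liquids"}},
    {"ProductName": "Chang", "SupplierID": 1,
     "Supplier": {"CompanyName": "Exotic Liquids"}},
    {"ProductName": "Ikura", "SupplierID": 4,
     "Supplier": {"CompanyName": "Tokyo Traders"}},
]

by_supplier = []
for _, g in groupby(sorted(products, key=lambda p: p["SupplierID"]),
                    key=lambda p: p["SupplierID"]):
    group = list(g)
    by_supplier.append({
        "CompanyName": group[0]["Supplier"]["CompanyName"],  # first.Supplier.CompanyName
        "Products": [p["ProductName"] for p in group],       # [.[].ProductName]
    })
print(by_supplier)
```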
Note: If we wanted to create the same shape as the OData output, with each product name as a value for a ProductName
property, this would just need a small change:
.value
| group_by(.SupplierID)
| map(
{
CompanyName: first.Supplier.CompanyName,
Products: map({ProductName})
}
)
This works because of the shortcut syntax available for jq
's object construction ({...}
) which is simply to use the name of the property (and you don't even need quotes):
{ProductName}
is the same as:
{"ProductName": .ProductName}
There's a wrapper around jq
called ijq (short for "interactive jq") which is a bit like a REPL in that it affords immediate feedback. It's a lovely program, and I use it a lot.
Yesterday I shared a short video of an example of how it can be used to explore a JSON dataset and I thought I'd give that example a more permanent home here on the blog.
(There's an asciinema version of this too).
In practising a little jq
, I thought I'd use it to find out the most common city in the Customers and Suppliers by Cities entityset in the V4 Northwind service.
This is the invocation I ended up with:
.value
| group_by(.City)
| map([length, first.City])
| sort_by(.[0])
| reverse
| first[1]
Here's a brief breakdown of the invocation I ended up with:
- .value: gives me the entire array of objects in the dataset, each one of which represents a customer or supplier in a city
- group_by(...): collects array elements together that have the same value for the path expression specified (in this case the City property), producing an array of arrays
- map(...): is much like map in other languages, in that it will apply the function or filter given to the input array, producing a new array
- [length, first.City]: uses the array constructor ([...]) to produce an array of two elements, the first being the length of the input (the inner array containing the same-city grouped objects) and the second being the value of the City property for the first element in that array*
- sort_by(...): sorts the input array (which is now the one with length-and-city-name elements) by the first item (.[0]), i.e. by the length
- reverse: simply reverses the order of the items of the array
- first[1]: then picks the second item ([1]) of the first element, which after the reverse-sort will be the length-and-city pair with the highest length

*during the interactive session, I'd just guessed that there would be a first
function, and there was!
For those of you wondering, I deliberately chose to reverse the list before picking out the first element, so the element would be at the top and therefore visible in ijq
's output window:
| sort_by(.[0])
| reverse
| first[1]
But I could have just as well done this:
| sort_by(.[0])
| last[1]
As a kind fellow rightly pointed out in the comments to my previous post JSON object values into CSV with jq: TIMTOWTDI, or "there is more than one way to do it", an adage from the Perl community.
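Incidentally, the most-common-city question has a neat Python counterpart in collections.Counter; here's an illustrative sketch with a few made-up rows in the same shape as the entityset:

```python
from collections import Counter

# Count City values across the rows and pick the most common one
value = [
    {"City": "London"}, {"City": "Berlin"},
    {"City": "London"}, {"City": "Paris"}, {"City": "London"},
]
most_common_city, count = Counter(r["City"] for r in value).most_common(1)[0]
print(most_common_city, count)
```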
In our current Back to basics: OData series on the Developer Advocates' Hands-on SAP Dev show I'm using various aspects of the classic Northwind OData service:
https://services.odata.org/V4/Northwind/Northwind.svc/
There's an entityset that I wanted to grab the data from, but have in CSV form. It's the Customer_and_Suppliers_by_Cities entityset that looks like this:
{
"@odata.context": "https://services.odata.org/V4/Northwind/Northwind.svc/$metadata#Customer_and_Suppliers_by_Cities",
"value": [
{
"City": "Berlin",
"CompanyName": "Alfreds Futterkiste",
"ContactName": "Maria Anders",
"Relationship": "Customers"
},
{
"City": "MĆ©xico D.F.",
"CompanyName": "Ana Trujillo Emparedados y helados",
"ContactName": "Ana Trujillo",
"Relationship": "Customers"
},
{
"City": "MĆ©xico D.F.",
"CompanyName": "Antonio Moreno TaquerĆa",
"ContactName": "Antonio Moreno",
"Relationship": "Customers"
}
]
}
I've reduced the actual representation down to just three entries to save space here, and will retrieve these three entries only (with OData's
$top
system query option) to keep the display of data in this blog post under control.
I wanted to turn this JSON into something like this:
"Berlin","Alfreds Futterkiste","Maria Anders","Customers"
"MĆ©xico D.F.","Ana Trujillo Emparedados y helados","Ana Trujillo","Customers"
"MĆ©xico D.F.","Antonio Moreno TaquerĆa","Antonio Moreno","Customers"
As I'm trying to learn more about jq I thought I'd use that.
Before doing anything else, I grab the representation into a local file called entities.json
like this (restricting the entities to the first three):
curl \
--url 'https://services.odata.org/v4/northwind/northwind.svc/Customer_and_Suppliers_by_Cities?$top=3' \
> entities.json
OK, so it's the values in the objects that I want, in the value
property. So I start out with the simple object identifier-index like this:
jq '.value' entities.json
This gives me:
[
{
"City": "Berlin",
"CompanyName": "Alfreds Futterkiste",
"ContactName": "Maria Anders",
"Relationship": "Customers"
},
{
"City": "MĆ©xico D.F.",
"CompanyName": "Ana Trujillo Emparedados y helados",
"ContactName": "Ana Trujillo",
"Relationship": "Customers"
},
{
"City": "MĆ©xico D.F.",
"CompanyName": "Antonio Moreno TaquerĆa",
"ContactName": "Antonio Moreno",
"Relationship": "Customers"
}
]
The value of the value
property is indeed an array of objects. So far so good. But I want to do something with each of those objects, so next I add the array value iterator ([]
) thus:
jq '.value[]' entities.json
This results in something that looks almost but not quite the same:
{
"City": "Berlin",
"CompanyName": "Alfreds Futterkiste",
"ContactName": "Maria Anders",
"Relationship": "Customers"
}
{
"City": "MĆ©xico D.F.",
"CompanyName": "Ana Trujillo Emparedados y helados",
"ContactName": "Ana Trujillo",
"Relationship": "Customers"
}
{
"City": "MĆ©xico D.F.",
"CompanyName": "Antonio Moreno TaquerĆa",
"ContactName": "Antonio Moreno",
"Relationship": "Customers"
}
What's happening here is that the iterator causes jq
to emit a JSON value for each item in the array. This is an important concept and also relates to the fact that jq
can process -- as well as emit -- multiple JSON values. I discuss this in the post Some thoughts on jq and statelessness which you may be interested to read.
In other words, while the first invocation (.value
) emitted a single JSON value (an array), the second (.value[]
) caused three JSON values (three objects) to be emitted, effectively one at a time.
To understand what the iterator does, we can run a simple experiment. We'll revisit each of the two jq
invocations we've done so far, adding a second filter into the mix via the pipe. We'll use the simple length
filter which, given an array will return the number of elements, and given an object will return the number of key-value pairs (and, for that matter, given a string, will return the length of that string).
Remember that the value of the value
property is an array. So this:
jq '.value | length' entities.json
returns the following:
3
Doing the same with the second invocation, in other words this:
jq '.value[] | length' entities.json
has a slightly different output:
4
4
4
In this case, what's happening is that the array iterator is causing each element of the array to be passed, one at a time, through the filter(s) that follow (i.e. through length
in this case). And when passed an object, length
returns the number of key-value pairs. There are four key-value pairs in each of the objects, so we get 4, but we get that three times, one for each object (and remember, each of these instances of 4
are valid JSON values).
So that we better understand where we're heading, I want to introduce the @csv
format string, which is described as follows:
The input must be an array, and it is rendered as CSV with double quotes for strings, and quotes escaped by repetition.
So this:
echo '[1,2,"buckle my shoe"]' | jq --raw-output '@csv'
(note the use of the --raw-output
(-r
) option so that jq
won't try to emit JSON values but instead output the values directly) results in CSV like this:
1,2,"buckle my shoe"
So our aim is to produce a list of arrays, one for each JSON object in the input (one for each entity, effectively). Each of these arrays can then be fed through the @csv
format string, to produce CSV records.
Each CSV record needs four values, the values for each key in the object(s):
{
"City": "México D.F.",
"CompanyName": "Antonio Moreno Taquería",
"ContactName": "Antonio Moreno",
"Relationship": "Customers"
}
In other words, values for City
, CompanyName
, ContactName
and Relationship
.
The simplest way to do this would be to just use object identifier-indices directly, something like this:
jq --raw-output '
.value[]
| [.City, .CompanyName, .ContactName, .Relationship]
| @csv
' entities.json
This gives us what we want:
"Berlin","Alfreds Futterkiste","Maria Anders","Customers"
"México D.F.","Ana Trujillo Emparedados y helados","Ana Trujillo","Customers"
"México D.F.","Antonio Moreno Taquería","Antonio Moreno","Customers"
But of course that's somewhat unsatisfactory. We'd have to examine the input data and then adjust the object identifier-indices each time we had different input data.
According to the great Larry Wall, the three great virtues of a programmer are laziness, impatience and hubris. And we can get a little nearer to laziness and also somewhat to impatience here by striving to make our solution determine the keys automatically.
There's a keys function in jq
which will return the keys of an object. That might get us part of the way. Let's try it out:
jq '
.value[]
| keys
' entities.json
This produces what we expect, or at least hope for:
[
"City",
"CompanyName",
"ContactName",
"Relationship"
]
[
"City",
"CompanyName",
"ContactName",
"Relationship"
]
[
"City",
"CompanyName",
"ContactName",
"Relationship"
]
In fact, we have the structure that we want and are now really only one "indirection" away from our goal. Let's put this into the CSV output context to see how it looks, by piping the result into @csv
:
jq --raw-output '
.value[]
| keys
| @csv
' entities.json
This gives us:
"City","CompanyName","ContactName","Relationship"
"City","CompanyName","ContactName","Relationship"
"City","CompanyName","ContactName","Relationship"
We can make use of these key values like City
with the object identifier-index construct. Well, almost. We need the more generic form, of which the object identifier-index is just a shorthand for when identifiers are simple and "string-like".
In other words, the generic object index can be used when the identifier is not "string-like" ... such as when it's a variable.
Let's step back and focus for a moment on just one of the objects - the first (0th) one - using the array index construction ([n]
):
jq '
.value[0]
| keys
' entities.json
This gives us:
[
"City",
"CompanyName",
"ContactName",
"Relationship"
]
Let's assign the keys to a variable $k
, and just emit the value of that variable:
jq '
.value[0]
| keys as $k | $k
' entities.json
Perhaps unsurprisingly, this gives us the same result:
[
"City",
"CompanyName",
"ContactName",
"Relationship"
]
But now we have an array of key names to work with!
Note that keys
produces an array, so we can use the array value iterator ([]
) to cause each of the keys to be emitted separately (looped through, effectively) and passed to subsequent filters.
Adding the iterator []
to the keys
function like this:
jq '
.value[0]
| keys[] as $k | $k
' entities.json
produces this:
"City"
"CompanyName"
"ContactName"
"Relationship"
This is a similar effect to what we've seen earlier; it causes jq
to iterate over the output of keys
one item at a time, so the $k
after the pipe in this sample is called four times, once for each key, each time producing a JSON value (the key name as a string) as output.
We may be focusing deeper and deeper on the keys here, but don't forget we always have the identity filter (.
) to give us access to the input, to whatever came through the pipe to where we are now, as it were.
Let's understand this, by way of something perhaps unexpected. Replacing the $k
at the end of the pipeline with simply .
, like this:
jq '
.value[0]
| keys[] as $k | .
' entities.json
actually gives us this:
{
"City": "Berlin",
"CompanyName": "Alfreds Futterkiste",
"ContactName": "Maria Anders",
"Relationship": "Customers"
}
{
"City": "Berlin",
"CompanyName": "Alfreds Futterkiste",
"ContactName": "Maria Anders",
"Relationship": "Customers"
}
{
"City": "Berlin",
"CompanyName": "Alfreds Futterkiste",
"ContactName": "Maria Anders",
"Relationship": "Customers"
}
{
"City": "Berlin",
"CompanyName": "Alfreds Futterkiste",
"ContactName": "Maria Anders",
"Relationship": "Customers"
}
Odd, the same object, four times. But when we stare at that for a second, we realise that it's exactly what we asked for. With keys[]
we're iterating through the keys of the object (City
, CompanyName
, ContactName
and Relationship
). Four of them. So whatever is beyond the pipe after that, which is simply the identity filter (.
), is being called four times. And the identity filter (which simply outputs whatever it receives as input) receives as input the original object.
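A sketch in Python (just to illustrate the semantics, nothing jq-specific) of why `keys[] as $k | .` yields the whole object once per key:

```python
# keys[] as $k | . sketched: the binding iterates over the keys, but the
# input flowing on to the rest of the pipeline is still the whole object.
obj = {
    "City": "Berlin",
    "CompanyName": "Alfreds Futterkiste",
    "ContactName": "Maria Anders",
    "Relationship": "Customers",
}

emitted = []
for k in sorted(obj):    # jq's keys emits keys in sorted order
    emitted.append(obj)  # the identity filter (.) still sees the object

print(len(emitted))      # 4: one copy of the object per key
```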
What we might expect .
to output is one key, each time. That would be the case if we didn't assign keys[]
to the variable $k
with keys[] as $k
. Let's remove the as $k
bit to see:
jq '
.value[0]
| keys[] | .
' entities.json
This produces:
"City"
"CompanyName"
"ContactName"
"Relationship"
So in this case, .
's input is (each time) one of the keys of the object. The important thing to realise here is that the variable assignment as $k
means that the input that came into that expression (the object) passes straight through unconsumed to the next filter. This part of the manual for the section on variables helps to explain:
The expression
exp as $x | ...
means: for each value of expressionexp
, run the rest of the pipeline with the entire original input, and with$x
set to that value. Thusas
functions as something of a foreach loop.
With this in mind, we should now be able to understand why this:
jq '
.value[0]
| keys[] as $k | .
' entities.json
produced four identical copies of the object.
While that's an odd thing to produce, it helps a lot here. Having the input at this stage in the pipeline (.
) set to the object, combined with the "foreach loop" (as the manual described it) iterating over the values in $k
, is very useful!
Let's look at that in a basic form; how about emitting an array with two elements, the first being the value of $k
and the second being the input, each time:
jq '
.value[0]
| keys[] as $k | [$k, .]
' entities.json
This gives us a combination of values like this:
[
"City",
{
"City": "Berlin",
"CompanyName": "Alfreds Futterkiste",
"ContactName": "Maria Anders",
"Relationship": "Customers"
}
]
[
"CompanyName",
{
"City": "Berlin",
"CompanyName": "Alfreds Futterkiste",
"ContactName": "Maria Anders",
"Relationship": "Customers"
}
]
[
"ContactName",
{
"City": "Berlin",
"CompanyName": "Alfreds Futterkiste",
"ContactName": "Maria Anders",
"Relationship": "Customers"
}
]
[
"Relationship",
{
"City": "Berlin",
"CompanyName": "Alfreds Futterkiste",
"ContactName": "Maria Anders",
"Relationship": "Customers"
}
]
And of course, look what we can do with that combination of data in .
and $k
, using the generic object index like this .[$k]
to look up the value of each of the keys:
jq '
.value[0]
| keys[] as $k | .[$k]
' entities.json
This results in:
"Berlin"
"Alfreds Futterkiste"
"Maria Anders"
"Customers"
Great! And if we wrap this entire expression in an array construction ([...]
), we then have the right shape (an array) to give to the @csv
format string (and as we're emitting CSV again we'll use the --raw-output
option again here):
jq --raw-output '
.value[0]
| [ keys[] as $k | .[$k] ]
| @csv
' entities.json
This produces a perfect single CSV record:
"Berlin","Alfreds Futterkiste","Maria Anders","Customers"
Now all we need to do is remove the array index (the 0
from .value[0]
) to go back to an iteration over all the items in the array:
jq --raw-output '
.value[]
| [ keys[] as $k | .[$k] ]
| @csv
' entities.json
and we get exactly what we're looking for:
"Berlin","Alfreds Futterkiste","Maria Anders","Customers"
"México D.F.","Ana Trujillo Emparedados y helados","Ana Trujillo","Customers"
"México D.F.","Antonio Moreno Taquería","Antonio Moreno","Customers"
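As a cross-check, here's the same end-to-end transformation sketched in Python (an illustration of the logic only, not of jq itself; the csv module handles the quoting that @csv gives us):

```python
import csv
import io

# The same pipeline sketched in Python: for each entity in .value, take
# the values in (sorted) key order and render them as a CSV record.
doc = {"value": [
    {"City": "Berlin", "CompanyName": "Alfreds Futterkiste",
     "ContactName": "Maria Anders", "Relationship": "Customers"},
]}

buf = io.StringIO()
writer = csv.writer(buf, quoting=csv.QUOTE_NONNUMERIC, lineterminator="\n")
for entity in doc["value"]:
    writer.writerow([entity[k] for k in sorted(entity)])

print(buf.getvalue(), end="")  # "Berlin","Alfreds Futterkiste","Maria Anders","Customers"
```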
I'm likely to want to use this approach again some time, so I'll store the core construct here as a function in my local ~/.jq
file (see the modules section of the manual for more detail):
def onlyvalues: [ keys[] as $k | .[$k] ];
Now I can use that function wherever I want; here's a great place, because it also simplifies the entire invocation:
jq --raw-output '
.value[]
| onlyvalues
| @csv
' entities.json
And yes, this produces the same output:
"Berlin","Alfreds Futterkiste","Maria Anders","Customers"
"México D.F.","Ana Trujillo Emparedados y helados","Ana Trujillo","Customers"
"México D.F.","Antonio Moreno Taquería","Antonio Moreno","Customers"
This turned out to be a longer post than I'd intended to write. I found that I wanted to make sure I explained each part of the solution, and why it was how it was. Of course, this has the benefit of causing me to think a little harder about what jq
is doing, which in turn helps me learn a little bit more about it.
With the gh
CLI it was easy to grab the names, and it gave me the opportunity to practise a bit of jq
. Here's what I did.
The SAP-samples organisation on GitHub is where we keep lots of sample code, configuration and more for various SAP services and products. We also store our workshop and CodeJam material in repositories there too.
There's a sort of loose naming convention, where the first part of the name gives a general indication of topic. For example, the first part of the cloud-messaging-handsonsapdev repository, "cloud", gives an indication that the topic is the cloud in general, and the first part of the btp-setup-automator repository, "btp", indicates that the main topic is the SAP Business Technology Platform.
I wanted to find out what the names were of all the repositories in the SAP-samples organisation, and understand the distribution across the different topics. Something like this, showing here that the most popular topic is "cloud":
1 abap
1 artifact
2 btp
3 cloud
2 sap
1 ui5
Requesting the names of public repositories with the GitHub CLI gh is easy. Here's an example:
gh repo list SAP-samples --limit 10 --public
This produces output something like this (output somewhat redacted for display purposes):
SAP-samples/cloud-sdk-js This re... public 7h
SAP-samples/cloud-cap-samples-java A sampl... public 15h
SAP-samples/btp-setup-automator Automat... public 15h
SAP-samples/btp-ai-sustainability-bootcamp This gi... public 15h
SAP-samples/cloud-cap-samples This pr... public 17h
SAP-samples/ui5-exercises-codejam Materia... public 19h
SAP-samples/cap-sflight Using S... public 1d
SAP-samples/cloud-cf-feature-flags-sample A sampl... public 1d
SAP-samples/cloud-espm-cloud-native Enterpr... public 2d
SAP-samples/iot-edge-samples Showcas... public 2d
This is a slightly contrived example, because I wanted to illustrate the distribution over a small number of repositories (10 in this case). To this end, I cut down the actual output to come up with a list of repositories that would illustrate the point. If you want to find out what I did with this list, and how I turned it into what
gh
would output, in particular what JSON structure it would produce (see the next section in this post), you may want to read the "prequel" post to this one: Converting strings to objects with jq.
With regular shell tools I could parse out the names, split off the topic prefix, and go from there. But I'm trying to improve my skills in jq, and the gh
CLI gives me an opportunity to do that, with the combination of two options.
With --json
I can specify fields I want to have returned to me. At first I was at a loss as to which fields were available to specify, but leaving off the value for --json
gives a list.
In other words, invoking this:
gh repo list --json
results in a list like this (cut short for brevity):
Specify one or more comma-separated fields for `--json`:
assignableUsers
codeOfConduct
contactLinks
createdAt
defaultBranchRef
deleteBranchOnMerge
description
diskUsage
forkCount
...
The field name
is available, and applying it as the value for --json
like this:
gh repo list SAP-samples --limit 10 --public --json name
gives this JSON output:
[
{
"name": "cloud-sdk-js"
},
{
"name": "cloud-cap-samples-java"
},
{
"name": "btp-setup-automator"
},
{
"name": "btp-ai-sustainability-bootcamp"
},
{
"name": "cloud-cap-samples"
},
{
"name": "ui5-exercises-codejam"
},
{
"name": "cap-sflight"
},
{
"name": "cloud-cf-feature-flags-sample"
},
{
"name": "cloud-espm-cloud-native"
},
{
"name": "iot-edge-samples"
}
]
With the --jq
option, a jq filter can be supplied that will be applied to the JSON output produced. Let's start with a very simple example.
As we can see, the structure returned is an array of objects, each containing the property or properties requested with the --json
option. So to obtain the value of each of the name
properties from the JSON output that we saw earlier, we can use .[] | .name
, or, more succinctly, .[].name
:
gh repo list SAP-samples --limit 10 --public \
--json name \
--jq .[].name
This returns the following:
artifact-of-the-month
cloud-sdk-js
sap-tech-bytes
cloud-cap-samples-java
btp-setup-automator
btp-ai-sustainability-bootcamp
sap-iot-samples
abap-platform-fundamentals-01
cloud-cap-samples
ui5-exercises-codejam
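In Python terms, `.[].name` is simply pulling one property out of each object in a list:

```python
# .[].name sketched in Python: extract the name property from each object
# in the array.
repos = [
    {"name": "cloud-sdk-js"},
    {"name": "btp-setup-automator"},
    {"name": "ui5-exercises-codejam"},
]
names = [repo["name"] for repo in repos]
print(names)  # ['cloud-sdk-js', 'btp-setup-automator', 'ui5-exercises-codejam']
```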
We can make one side observation here. Normally, we'd expect to see JSON values output from jq
; in other words, double-quoted strings like this:
"artifact-of-the-month"
"cloud-sdk-js"
"sap-tech-bytes"
"cloud-cap-samples-java"
"btp-setup-automator"
"btp-ai-sustainability-bootcamp"
"sap-iot-samples"
"abap-platform-fundamentals-01"
"cloud-cap-samples"
"ui5-exercises-codejam"
So it seems like when a jq
filter is applied via the --jq
option to gh
, it's applied with the --raw-output
(-r
) option implicitly. I think that makes sense, especially if the output is to be used with other Unix command line tools later on in a pipeline.
Now we have the context in which we can invoke a jq filter on the JSON output from gh
, let's dig in a little more. Bear in mind that this may not be the most efficient way of doing things, but I thought it might still be useful, and it certainly helps me to try to express something in jq in public, as it were.
To be kind to the API, I'll grab the JSON output from the gh
invocation and use that while I build up the filter:
gh repo list SAP-samples --limit 10 --public \
--json name \
> names.json
As a reminder, the content of names.json
will look like this:
[
{
"name": "cloud-sdk-js"
},
{
"name": "cloud-cap-samples-java"
},
{
"name": "btp-setup-automator"
},
{
"name": "btp-ai-sustainability-bootcamp"
},
{
"name": "cloud-cap-samples"
},
{
"name": "ui5-exercises-codejam"
},
{
"name": "cap-sflight"
},
{
"name": "cloud-cf-feature-flags-sample"
},
{
"name": "cloud-espm-cloud-native"
},
{
"name": "iot-edge-samples"
}
]
The convention is to use dashes to separate the different parts of the repository names, so it occurs to me that I can use split, which produces an array, and then grab the first element.
Let's have a first go, based on the name
property access we saw earlier:
jq '.[].name | split("-") | .[0]' names.json
This produces the following list:
"artifact"
"cloud"
"sap"
"cloud"
"btp"
"btp"
"sap"
"abap"
"cloud"
"ui5"
In jq, there are plenty of functions that operate on arrays, such as sort, min and max, and reverse. There's also group_by, which is what will be useful for our requirements here. The manual's description is as follows:
group_by(.foo)
takes as input an array, groups the elements having the same.foo
field into separate arrays, and produces all of these arrays as elements of a larger array, sorted by the value of the.foo
field.
We're starting from an array (note the outer enclosing [...]
in the data we're working on) so it makes sense to try to keep that array context. So rather than use the array / object iterator, which "explodes" an array into separate results, we can use map here:
jq 'map(.name | split("-") | .[0])' names.json
This produces the same values, but within an array:
[
"artifact",
"cloud",
"sap",
"cloud",
"btp",
"btp",
"sap",
"abap",
"cloud",
"ui5"
]
Now we can use group_by on this (switching here to a multi-line version for better readability):
jq \
'map(.name | split("-") | .[0])
| group_by(.)' \
names.json
This seems to "do exactly what it says on the tin":
[
[
"abap"
],
[
"artifact"
],
[
"btp",
"btp"
],
[
"cloud",
"cloud",
"cloud"
],
[
"sap",
"sap"
],
[
"ui5"
]
]
Note that the value passed to group_by
is .
, i.e. the path_expression
is the entire string value, for example "artifact"
, "cloud"
, "sap"
etc.
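The grouping itself can be sketched in Python with itertools.groupby; note that groupby only groups adjacent equal values, so we sort first (jq's group_by sorts by the grouping value as part of its contract):

```python
from itertools import groupby

# group_by(.) sketched in Python: sort, then collect runs of equal values
# into sub-arrays of a larger array.
topics = ["artifact", "cloud", "sap", "cloud", "btp",
          "btp", "sap", "abap", "cloud", "ui5"]

grouped = [list(group) for _, group in groupby(sorted(topics))]
print(grouped)
# [['abap'], ['artifact'], ['btp', 'btp'], ['cloud', 'cloud', 'cloud'],
#  ['sap', 'sap'], ['ui5']]
```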
Great. We can already start to see the distribution of topics now, but let's go a bit further.
I think ideally I'd like a flat list of topics with their counts, in a tab-separated list, as that is then conducive to further processing on the command line should I want to. In other words, I want this sort of line for each topic:
[count][tab][topic-name]
First, let's produce the raw data for this list. While we wanted to avoid exploding the array earlier, now would be the time to use the array / object iterator:
jq \
'map(.name | split("-") | .[0])
| group_by(.)
| .[]' \
names.json
This produces a JSON value for each of the array items. Here, each item, and thus each value produced, is an array containing one or more instances of a topic name:
[
"abap"
]
[
"artifact"
]
[
"btp",
"btp"
]
[
"cloud",
"cloud",
"cloud"
]
[
"sap",
"sap"
]
[
"ui5"
]
In effect, this removes the outermost [...]
array that contains all these inner arrays.
Now it's just a matter of defining what we want to see with the array constructor: in this case, two elements representing the length of the array and its first value, i.e. [length, .[0]]
:
jq \
'map(.name | split("-") | .[0])
| group_by(.)
| .[]
| [length, .[0]]' \
names.json
Remember that this construct .[] | ...
will iterate through each array element and pass them one at a time to the filter that follows the pipe. And this produces the following:
[
1,
"abap"
]
[
1,
"artifact"
]
[
2,
"btp"
]
[
3,
"cloud"
]
[
2,
"sap"
]
[
1,
"ui5"
]
We have our list of topic counts, so now let's add the final touch to have a tab-separated list. There's nothing further we need to do to the data; it's as we want it. So we just need some formatting. In the Format strings and escaping section of the jq manual, we see that there's the @tsv
format string, which is described thus:
The input must be an array, and it is rendered as TSV (tab-separated values). Each input array will be printed as a single line.
This is exactly what we're looking for. Note that here, the "input array" referred to is each of the individual arrays in the output above, i.e. this is the first array:
[
1,
"abap"
]
Let's try it:
jq \
'map(.name | split("-") | .[0])
| group_by(.)
| .[]
| [length, .[0]]
| @tsv' \
names.json
"1\tabap"
"1\tartifact"
"2\tbtp"
"3\tcloud"
"2\tsap"
"1\tui5"
Close! Remember that an invocation of jq
on the command line will output JSON values by default. These strings are JSON values. But here we want the raw form, via the --raw-output
(-r
) option, to benefit from (and see) the tab characters (\t
) that the @tsv
has put in for us:
jq -r \
'map(.name | split("-") | .[0])
| group_by(.)
| .[]
| [length, .[0]]
| @tsv' \
names.json
This gives us what we're looking for:
1 abap
1 artifact
2 btp
3 cloud
2 sap
1 ui5
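The whole filter can be mirrored in Python with collections.Counter (again, a sketch of the logic, not of jq itself):

```python
from collections import Counter

# The whole pipeline sketched in Python: take the first segment of each
# name, count the occurrences, and print count<TAB>topic.
names = ["artifact-of-the-month", "cloud-sdk-js", "sap-tech-bytes",
         "cloud-cap-samples-java", "btp-setup-automator",
         "btp-ai-sustainability-bootcamp", "sap-iot-samples",
         "abap-platform-fundamentals-01", "cloud-cap-samples",
         "ui5-exercises-codejam"]

counts = Counter(name.split("-")[0] for name in names)
for topic in sorted(counts):
    print(f"{counts[topic]}\t{topic}")
```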
And in fact, remembering that when a jq filter is invoked from gh
via the --jq
option the raw output is used by default, we can now put everything together and benefit from that in the final gh
invocation, which looks like this:
gh repo list SAP-samples --limit 10 --public \
--json name \
--jq \
'map(.name | split("-") | .[0])
| group_by(.)
| .[]
| [length, .[0]]
| @tsv'
This gives us the same result, i.e.:
1 abap
1 artifact
2 btp
3 cloud
2 sap
1 ui5
So I can see that the most common topic here is "cloud".
I'm happy with this approach, how I'm starting to get a better feel for how data flows through a jq filter, and also that I can use such filters with the GitHub CLI.
gh
CLI). I achieved this with a short jq filter. Here's how, recorded here, with my working thoughts, mostly for my outboard memory.
For that other blog post I wanted to start with a list of repositories from GitHub. The list produced by the command I was invoking (gh repo list SAP-samples --limit 10 --public
) was fine but to illustrate the wider point of the post I wanted to select specific repository names. So I ended up with a manually edited list like this, in a file called names.txt
:
SAP-samples/cloud-sdk-js
SAP-samples/cloud-cap-samples-java
SAP-samples/btp-setup-automator
SAP-samples/btp-ai-sustainability-bootcamp
SAP-samples/cloud-cap-samples
SAP-samples/ui5-exercises-codejam
SAP-samples/cap-sflight
SAP-samples/cloud-cf-feature-flags-sample
SAP-samples/cloud-espm-cloud-native
SAP-samples/iot-edge-samples
What I wanted was a JSON version of this, where each repository name, minus the organisation prefix (SAP-samples/
), was represented in a name
property in an object, with all of them wrapped in an outer array, like this:
[
{
"name": "cloud-sdk-js"
},
{
"name": "cloud-cap-samples-java"
},
{
"name": "btp-setup-automator"
},
{
"name": "btp-ai-sustainability-bootcamp"
},
{
"name": "cloud-cap-samples"
},
{
"name": "ui5-exercises-codejam"
},
{
"name": "cap-sflight"
},
{
"name": "cloud-cf-feature-flags-sample"
},
{
"name": "cloud-espm-cloud-native"
},
{
"name": "iot-edge-samples"
}
]
First off, the content of the text file is lines of raw text, so I'll need to use the --raw-input
(-R
) option to tell jq
that.
Incidentally, if the lines of the file had been like this (where each line was enclosed in double quotes):
"SAP-samples/cloud-sdk-js"
"SAP-samples/cloud-cap-samples-java"
"SAP-samples/btp-setup-automator"
...
then I wouldn't have needed this option, as these lines are all valid JSON values (a double-quoted string is a valid JSON value).
While thinking of command line options, I then considered the --slurp
(-s
) option. This is because I was thinking about gathering up the entire input to pass through the filter once, because I needed the final result to be enclosed in a single, outer array. For more on slurping and statelessness, you may like to read Some thoughts on jq and statelessness.
What I noticed is that --slurp
has a very specific effect when used with the --raw-input
option, as described in the manual - see the second sentence here:
--raw-input
: Don't parse the input as JSON. Instead, each line of text is passed to the filter as a string. If combined with--slurp
, then the entire input is passed to the filter as a single long string.
This would be a way to read all the repository names in at once, which would give me a chance to output them, transformed, in an enclosing array.
So let's start by looking at the effect of the combination of these two options, when processing the input data with the simple identity filter (.
). With this invocation:
jq -s -R . names.txt
we get this, a single string:
"SAP-samples/cloud-sdk-js\nSAP-samples/cloud-cap-samples-java\nSAP-samples/btp-setup-automator\nSAP-samples/btp-ai-sustainability-bootcamp\nSAP-samples/cloud-cap-samples\nSAP-samples/ui5-exercises-codejam\nSAP-samples/cap-sflight\nSAP-samples/cloud-cf-feature-flags-sample\nSAP-samples/cloud-espm-cloud-native\nSAP-samples/iot-edge-samples\n"
At first I thought I could simply then separate the names by using split
to chop up on what looked to be a newline (\n
) character separating each one; this would be ideal as split
produces an array, which is exactly what I'm looking for:
jq -s -R 'split("\n")' names.txt
But this wasn't quite right, producing this:
[
"SAP-samples/cloud-sdk-js",
"SAP-samples/cloud-cap-samples-java",
"SAP-samples/btp-setup-automator",
"SAP-samples/btp-ai-sustainability-bootcamp",
"SAP-samples/cloud-cap-samples",
"SAP-samples/ui5-exercises-codejam",
"SAP-samples/cap-sflight",
"SAP-samples/cloud-cf-feature-flags-sample",
"SAP-samples/cloud-espm-cloud-native",
"SAP-samples/iot-edge-samples",
""
]
What's that random empty string at the end?
Turns out that I wasn't staring hard enough at the single string; the newline characters weren't used to "join" each string, they were just there because each of the strings themselves included a newline.
In other words, they weren't separators, they were just part of the data, and so the last newline at the end of the last string "SAP-samples/iot-edge-samples" meant that split
would produce a final empty value, i.e. what it found to the right of the last newline character, as we can see in the last array position above (""
).
Of course, I was tempted to munge the input data before even feeding it to jq
, so each repository name would be a valid JSON value. I would do this by enclosing each of them in double quotes. But that wasn't what I was looking to do here, I wanted to use jq
on its own.
Another way would be just to ignore the last value in the array, like this:
jq -s -R 'split("\n") | .[:-1]' names.txt
This makes use of the array slice, where the second filter .[:-1]
says to return all the array elements up to but not including the last one, producing the basics of what we're looking for:
[
"SAP-samples/cloud-sdk-js",
"SAP-samples/cloud-cap-samples-java",
"SAP-samples/btp-setup-automator",
"SAP-samples/btp-ai-sustainability-bootcamp",
"SAP-samples/cloud-cap-samples",
"SAP-samples/ui5-exercises-codejam",
"SAP-samples/cap-sflight",
"SAP-samples/cloud-cf-feature-flags-sample",
"SAP-samples/cloud-espm-cloud-native",
"SAP-samples/iot-edge-samples"
]
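The same trailing-empty-element behaviour (and the slice fix) is easy to reproduce in Python:

```python
# Splitting a string that ends with the separator leaves a trailing empty
# string, exactly as jq's split does; dropping the last element with
# [:-1] is the same fix as jq's array slice .[:-1].
text = "SAP-samples/cap-sflight\nSAP-samples/iot-edge-samples\n"
parts = text.split("\n")
print(parts)       # [..., 'SAP-samples/iot-edge-samples', '']
print(parts[:-1])  # the trailing '' removed
```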
While this would be perfectly practical, creating and then removing unwanted data elements didn't feel entirely agreeable to me today, so I looked for another approach.
On my walk, thinking about this, I decided to see if there were any approaches that didn't involve the use of the --slurp
option. And there was, in the form of inputs, which, according to the manual:
outputs all remaining inputs, one by one.
This suggested to me that if I were to call inputs
at the start, I'd likely get all but the first string, and this was the case:
jq -R inputs names.txt
This produced the following:
"SAP-samples/cloud-cap-samples-java"
"SAP-samples/btp-setup-automator"
"SAP-samples/btp-ai-sustainability-bootcamp"
"SAP-samples/cloud-cap-samples"
"SAP-samples/ui5-exercises-codejam"
"SAP-samples/cap-sflight"
"SAP-samples/cloud-cf-feature-flags-sample"
"SAP-samples/cloud-espm-cloud-native"
"SAP-samples/iot-edge-samples"
The first string
"SAP-samples/cloud-sdk-js"
was missing, as it was already "consumed" ... but happily available in .
. So I could construct an array directly at the start of the filter program, like this:
jq -R '[.,inputs]' names.txt
See the end of this post for an update on this.
Lo and behold, it seems that this is exactly the sort of thing I'm looking to start with:
[
"SAP-samples/cloud-sdk-js",
"SAP-samples/cloud-cap-samples-java",
"SAP-samples/btp-setup-automator",
"SAP-samples/btp-ai-sustainability-bootcamp",
"SAP-samples/cloud-cap-samples",
"SAP-samples/ui5-exercises-codejam",
"SAP-samples/cap-sflight",
"SAP-samples/cloud-cf-feature-flags-sample",
"SAP-samples/cloud-espm-cloud-native",
"SAP-samples/iot-edge-samples"
]
Now that I had the basic structure, it was then just a matter of modifying each element, from a string to an object. Moreover, given that I had the elements where I wanted them, in an outer array, it seemed sensible from this point onwards to express the transformations required via map, which (like map
in other languages) takes an array and produces an array; I guess it's as much a paradigm as it is a function or filter.
So for example, I could replace each string with its length, while still keeping the structure, by passing the [.,inputs]
into map
like this:
jq -c -R '[.,inputs] | map(length)' names.txt
This would produce the following (note I've used the --compact-output
(-c
) option to save space here):
[24,34,31,42,29,33,23,41,35,28]
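jq's map corresponds directly to a list comprehension (or map) in Python:

```python
# map(length) sketched in Python: at this stage the lines still carry the
# SAP-samples/ prefix, hence lengths like 24 and 34.
lines = ["SAP-samples/cloud-sdk-js", "SAP-samples/cloud-cap-samples-java"]
lengths = [len(line) for line in lines]
print(lengths)  # [24, 34]
```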
In the modification requirements, I first had to remove the SAP-samples/
organisation name prefix, and I turned to sub for that, as I'm partial to the occasional regular expression:
jq -R '[.,inputs] | map(sub("^.+/";""))' names.txt
Mapping the substitution of ^.+/
(anchored at the start of the line, at least one but possibly more characters, up to and including a forward slash) with nothing (""
) gives this:
[
"cloud-sdk-js",
"cloud-cap-samples-java",
"btp-setup-automator",
"btp-ai-sustainability-bootcamp",
"cloud-cap-samples",
"ui5-exercises-codejam",
"cap-sflight",
"cloud-cf-feature-flags-sample",
"cloud-espm-cloud-native",
"iot-edge-samples"
]
The second transformation was to make the simple string value into the value for a property called name
, within an object.
So for the first string
"cloud-sdk-js"
I wanted this:
{
"name": "cloud-sdk-js"
}
Similar to the array construction there's also the object construction, with which objects can be created on the fly quite easily. And as the manual says:
If the keys are "identifier-like", then the quotes can be left off
So I can use name
rather than "name"
for the property, reducing the JSON noise a little:
jq -R '[.,inputs] | map(sub("^.+/";"")) | map({name: .})' names.txt
This produces:
[
{
"name": "cloud-sdk-js"
},
{
"name": "cloud-cap-samples-java"
},
{
"name": "btp-setup-automator"
},
{
"name": "btp-ai-sustainability-bootcamp"
},
{
"name": "cloud-cap-samples"
},
{
"name": "ui5-exercises-codejam"
},
{
"name": "cap-sflight"
},
{
"name": "cloud-cf-feature-flags-sample"
},
{
"name": "cloud-espm-cloud-native"
},
{
"name": "iot-edge-samples"
}
]
Actually we can reduce the filter a little here, by including the object construction within the first map
, like this:
jq -R '[.,inputs] | map(sub("^.+/";"") | {name: .})' names.txt
and it produces exactly the same thing. And what it produces is what we're looking for.
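For comparison, the same two-step transformation can be sketched in Python with re.sub (an analogue of the logic, not of jq):

```python
import re

# Strip everything up to and including the slash, then wrap each bare
# name in an object with a name property, as the jq filter does.
lines = ["SAP-samples/cloud-sdk-js", "SAP-samples/btp-setup-automator"]
objects = [{"name": re.sub(r"^.+/", "", line)} for line in lines]
print(objects)
# [{'name': 'cloud-sdk-js'}, {'name': 'btp-setup-automator'}]
```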
So there we are, I can now produce a simulation of what gh
's JSON output creates, from a flat list of simple strings, using a modest filter with jq
. Of course, there are other ways of achieving this, but I'm happy with this for now.
There is some brief discussion of this post on Hacker News and Lobsters.
Update: in the middle of the night last night, after publishing this post, I woke up and suddenly realised that I could make this even neater, by the use of the --null-input
(-n
) option, which is described as follows:
Don't read any input at all! Instead, the filter is run once using null as the input.
That in turn means that I could avoid the two-item list of .
and inputs
, and simply have:
jq -R -n '[inputs]' names.txt
I do still have a place in my heart for [.,inputs]
because it reminds me of the fundamental "first and rest", or "head and tail" concept from functional programming. See the "Subsequent understanding" section in The beauty of recursion and list machinery for more on this, if you're interested.
I do like articles like this, that show and lay out the thinking behind the conclusion, and along the way, impart knowledge about the topic at hand. Especially when they're on a subject I'm eager to learn more about.
While reading the article a couple of things struck me.
First, I'd not really heard of the phrase "stateless dataflow" (and its opposite "stateful dataflow"). I did look it up via Google and found that there were very few results, most of them being scholarly papers either in PDF or even PostScript form. So I sort of forgave myself for not really knowing what was implied, although I had taken a guess anyway.
Basically the author was explaining that the reason for finding the jq
language difficult was down to the computational model. I don't think jq
is the easiest language, and in my experience so far that is down to a number of things, not least the relative terseness of the official manual, but also my inability to grasp powerful constructs, as well as having to manipulate complex object and array structures in my head, not only statically, but also having to imagine how they might change when processed through filters.
It seems that the author's issue with the "stateless dataflow" was down to the fact that what's being processed by jq
is very often a stream of discrete JSON values, rather than a single value.
So what do I mean by "JSON value"? Well, in the article Introducing JSON there's a McKeeman form expressing the JSON grammar, and the building blocks of what we know as JSON are described as "JSON values" thus:
value
object
array
string
number
"true"
"false"
"null"
These JSON values are described as fundamental building blocks in RFC 8259.
Anything expressed in JSON will be one of these value types. This is why, for example, "hello world"
is valid JSON, as is 42
.
In the "Invoking jq" section of the manual, it says:
jq filters run on a stream of JSON data. The input to jq is parsed as a sequence of whitespace-separated JSON values which are passed through the provided filter one at a time. The output(s) of the filter are written to standard out, again as a sequence of whitespace-separated JSON data.
Key for me, in my journey towards a deeper understanding of jq
, is that the "filter" here is the entire jq
program, whether that's something short expressed literally on the command line, or in a file, pointed to with the --from-file
or -f
option.
So each and every JSON value that is passed into jq
is processed by the entire program.
There's the "slurp" option (with --slurp
or -s
) which will "read the entire input stream into a large array and run the filter just once". This is maybe what one might initially assume or expect jq
to do, but one needs to be explicit.
Perhaps a small example might help, based on a sequence of JSON values that we can produce with seq:
seq 3
produces:
1
2
3
If we pass this sequence of JSON values through the simplest of jq
filters -- the identity function -- like this:
seq 3 | jq .
then we get this:
1
2
3
One might think "well, what else would you expect?" but this illustrates the nature of running discrete JSON values through a filter quite nicely.
Before we continue, let's use the --compact-output
(or -c
) option here:
seq 3 | jq -c .
The output is the same:
1
2
3
For me, this drives home the "discrete JSON values" approach to both jq
's input and output processing - there are three discrete values in, and three out.
I guess this also helps explain what the author of the article means by "stateless". As far as the filter is concerned, it's seeing the values 1
, 2
and 3
separately and in new contexts each time. And as the article illustrates, this is where jq
's --slurp
(or -s
) option comes in. Adding the option to the above example:
seq 3 | jq -c -s .
produces this:
[1,2,3]
A single JSON value. This is because what the filter received was actually this:
[
1,
2,
3
]
Three discrete values, but wrapped in an outer enclosing array. A single JSON value, in the form of an array. And the filter, being the simple identity function that just regurgitates what it reads, produces in turn that same single JSON value as output. It appears on one line here, rather than pretty printed with more whitespace, because of the -c
option.
The --slurp
option brings about a sort of statefulness, in that every discrete JSON value, previously independent, now shares the same single context of the single invocation of the jq
filter.
Changing the filter from the .
identity function to the add
function* demonstrates this singular context, this "statefulness":
seq 3 | jq -s add
This yields the single JSON value:
6
*I'm calling them "functions", but the manual actually calls them "filters"
There's one more observation I'd like to make in these ramblings. The article describes the task of adding up the numbers here:
echo '[1,2,3] [4,5,6]'
In other words, the result should be 21.
We know by now that this:
[1,2,3] [4,5,6]
is actually two discrete JSON values. Two arrays. So, as the author demonstrates, the --slurp
option is called for, thus:
echo '[1,2,3] [4,5,6]' | jq -s '[.[] | add] | add'
So in this invocation, the filter is executed once only, and actually receives:
[
[1,2,3],
[4,5,6]
]
The article does a great job of describing the author's thought process here, and also showing how some of the basic filters work. And I guess the filter used here is possibly deliberately complex, or at least contrived to illustrate a point:
[.[] | add] | add
However, to be fair to the language, it has some syntactic sugar in the form of map. In the description, we read:
map(x)
is equivalent to [.[] | x]
. In fact, this is how it's defined.
And we can see this definition in jq
's source, specifically in the builtin.jq file:
def map(f): [.[] | f];
This definition helps the mental model, and helps me a lot, not only to reduce noise, but also to relate the computation to an arguably well-known function (map). So the entire line turns into a much simpler:
echo '[1,2,3] [4,5,6]' | jq -s 'map(add) | add'
This has turned into a bit of a longer ramble, beyond what I'd originally commented. But writing it has helped me think about this a bit more. Perhaps it helps you too - I hope so!
And most importantly, my thoughts in this post should not detract from the article nor from their conclusions with zq - more power to them!
The exercise in question is Atbash Cipher and the features that I wanted to share with you are from user Victor Guthrie's solution.
The first line in Victor's solution is as follows:
alphabet=({a..z})
The concise nature of this is quite striking. There are two mechanisms at play here. The first is the outer brackets (...)
. Brackets are used in different contexts in Bash, but here, without any leading symbol before the opening bracket, and in the context of an assignment to a variable, they represent the definition of an array.
Here's a simple example, with a variable letters
declared thus:
letters=(a b c)
This results in letters
being an array of three elements, the values a
, b
and c
. We can check this as follows:
for letter in "${letters[@]}"; do echo "-> $letter"; done
This produces:
-> a
-> b
-> c
So what's the {a..z}
inside of the brackets in this particular case? Well, given the variable name and the a
and the z
we can probably reasonably guess that it's all the letters in the alphabet.
And we'd be right. But what is that construct and how does it work? I find that one of the key aspects of learning Bash and any language is knowing what things are called, so you can research them in the documentation.
And the {...}
construct is called brace expansion, which is described as:
A mechanism by which arbitrary strings may be generated.
There are plenty of other expansion mechanisms in Bash, which are documented in the Shell Expansions section of the manual.
Anyway, this example will expand the characters a
and z
lexicographically, using the default C locale, resulting in every letter of the alphabet. What brace expansion offers is more than this, and I'd recommend you take a quick look at the page in the manual. To give you a taste, you can do things like this:
echo {a,b}{1..3}
which results in:
a1 a2 a3 b1 b2 b3
You can even use numbers, with an optional increment value, like this:
echo {1..10..2}
the ..2
is the optional increment, and this expands to:
1 3 5 7 9
Nice!
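Coming back to the line that started all this, here's a quick sanity check (my own example, plain Bash) that alphabet=({a..z}) really does produce a 26-element array of single letters:

```shell
#!/usr/bin/env bash
# The brace expansion {a..z} generates 26 strings, and the outer ( ... )
# turns them into an array; array indexing is zero-based.
alphabet=({a..z})
echo "${#alphabet[@]}"                 # -> 26
echo "${alphabet[0]} ${alphabet[25]}"  # -> a z
```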
In the main
function of Victor's solution, the first line is this:
local trimmed="${2//[^[:alnum:]]/}"
We've come across some of the constructs here before, but let's break this down to get to something I'd vaguely known about but never could remember how to express it, until now.
The value being assigned to the locally declared variable trimmed
is the result of a shell parameter expansion. Specifically it's this construct, where pattern
is replaced with string
inside of the given parameter
:
${parameter/pattern/string}
In fact, the version we see in the solution is the "all matches" version, where pattern
itself begins with a /
; this is described in the manual thus:
If pattern begins with "/", all matches of pattern are replaced with string.
In other words we should first see this in the expression:
local trimmed="${2//.../}"
By the way, the 2
here refers to the second positional parameter passed into the main
function.
Note that the string
is empty here; in other words, every occurrence of where pattern
is matched is replaced with nothing, i.e. effectively removed.
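A quick aside to see that removal in action, with a toy string of my own rather than the solution's input:

```shell
#!/usr/bin/env bash
# With an empty replacement string, every match of the pattern is
# simply removed from the expanded value.
greeting="hello, world"
echo "${greeting//l/}"    # all three "l" characters removed -> heo, word
```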
So what is the pattern? Let's now stare at it for a second:
[^[:alnum:]]
There are two things going on here. Well, three if you count the ^
separately. Working from the outside in, we start with a bracket expression, thus:
[...]
This is simply a list of characters, where any of them can be matched. It's also possible to use a "range expression" instead of a list of characters, so a-c
would match a
, b
or c
. It's even possible to combine range expressions with single characters. For example, this:
fruit=bananas
echo ${fruit//[sa-c]/_}
would result in:
__n_n__
A circumflex (^
) in the first character position of the bracket expression negates the characters listed, so this:
fruit=bananas
echo ${fruit//[^sa-c]/_}
would result in:
ba_a_as
So we can see that
[^[:alnum:]]
is a bracket expression which is negating something ([^...]
), but what is being negated is neither a list of single characters nor a range expression. It's this:
[:alnum:]
This is a "character class", of which there are several, described in the Character Classes and Bracket Expressions part of the manual, and "alnum" is short for "alphanumeric", basically meaning letters and numbers. The equivalent bracket expression for [:alnum:]
would be [0-9a-zA-Z]
.
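We can check that equivalence with a made-up string of my own (in the default C locale, where the two forms should match the same characters):

```shell
#!/usr/bin/env bash
# Both patterns strip everything that is NOT a letter or a digit,
# so both should leave the same result behind.
value="a1-b2_c3!"
echo "${value//[^[:alnum:]]/}"    # character class version -> a1b2c3
echo "${value//[^0-9a-zA-Z]/}"    # explicit ranges version -> a1b2c3
```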
With this in mind, we now know what's happening with this line:
local trimmed="${2//[^[:alnum:]]/}"
The variable trimmed
is being given the value of $2
(the second positional parameter passed to the function) but with anything that's not a letter or a number removed.
My problem in the past was that I hadn't taken enough time to stare at the different parts of expressions like this, and could therefore not quite remember whether the opening square brackets went together or not. But now that I know that the outermost pair is the bracket expression, and the inner pair is the character class, it is obvious that the ^
negation must go between the two opening square brackets, as each of them belongs to a completely separate construct.
The final line in the solution that I want to stare at for a moment is this one:
((i < length - 1)) && ((i % 5 == 4)) && output+=' '
In my first solution to this exercise I had a similar approach to adding a space every few characters (this was part of the requirements for the encoding output in the task), but my equivalent line was a lot noisier. Victor's version is cleaner and very pleasant to read.
It uses the double parentheses construct for a couple of arithmetic expression evaluations. As well as regular numeric expressions, shell arithmetic allows for logical expressions too, which is what we see here in both examples, where the operator in the first example is <
("is i less than one less than the value of length?") and the operator in the second example is ==
("is i modulo 5 equal to 4?").
If both of these arithmetic expressions evaluate to true then the final expression
output+=' '
causes a space to be added to the end of the value in output
.
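Here's a sketch of how that line behaves, in a toy loop of my own rather than the full solution: a space is added after every fifth character, except at the very end of the output.

```shell
#!/usr/bin/env bash
# Build up output one character at a time; after every fifth character
# (i % 5 == 4), append a space, unless we're on the last character.
word="abcdefghijkl"
length=${#word}
output=""
for ((i = 0; i < length; i++)); do
  output+="${word:$i:1}"
  ((i < length - 1)) && ((i % 5 == 4)) && output+=' '
done
echo "$output"    # -> abcde fghij kl
```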
That's about it for this community solution. There's always lots to learn from reading code, and I'm getting a lot out of the community solutions on Exercism. Thanks folks!
I thought I'd write about another Exercism community solution that caught my eye this morning. So I went to my blog repository locally, and thought:
Actually, what I need is an updated version of my old script that sets up a new blog post file, so I can streamline the authoring of a new post.
I've recently moved to 11ty and it's a decent static site generator; it has introduced a slightly new structure, and I'm happy with it so far, but it means I need a slightly different workflow to create a new blog post file.
Anyway, this thought should have been an early warning sign, but I sort of ignored it.
Then, in thinking about what I'd want this script to do, I started to think about what input I'd give it. Initially just the blog post title, perhaps, but then:
What about tags, and how would I specify them? Why don't I choose them from a list? But then how would I determine that list?
The tags in any given post are declared in the frontmatter; here's the frontmatter for the previous post Bash notes 2:
---
layout: post
title: Bash notes 2
tags:
- shell
- til
- exercism
---
I had the idea of pulling out all the tags from all the Markdown files that represented posts. But how would I do that? I quickly descended to the next level down in my yak shaving journey.
I could simply look through each of the files for any line that started with a couple of spaces, had a dash, and then a word. But I couldn't be sure that this approach wouldn't be too eager, and match blog post body content that wasn't tag related. So I thought it best to match those lines where tags:
preceded them.
I had an inkling that something like multiline matching with grep
might help, or even sed
. There was a related question on Stack Overflow to which this answer seemed as intriguing as it was concise:
sed -e '/abc/,/efg/!d' [file-with-content]
The first iteration of translating this into my requirements, and trying it out on the blog post files for this year so far, looks like this:
sed -e '/^tags:/,/---/!d' 2022-*
This gave me the following output:
tags:
- sap-community
---
tags:
- jq
- learning
- bats
- shell
- exercism
---
tags:
- cloudfoundry
- kubernetes
---
tags:
- jq
- functional
- javascript
---
tags:
- shell
- til
- exercism
---
tags:
- shell
- til
- exercism
---
tags:
- shell
---
tags:
- shell
- til
- exercism
---
A second iteration, adding a second instruction /^ - /!d
to search within the results for just the tag lines, looks like this:
sed -e '/^tags:/,/---/!d; /^ - /!d' 2022-*
And this gave me (output reduced for brevity):
- sap-community
- jq
- learning
- bats
- shell
- exercism
- cloudfoundry
- kubernetes
- jq
- functional
- javascript
- shell
- til
- exercism
- shell
- til
- exercism
- shell
- shell
...
So there are two more tasks here - to reduce each line to just the tag name (i.e. to remove the bullet point and spaces) and to deduplicate the list.
As we're already in sed
mode, the first of these reductions might as well be a third instruction, specifically s/^ - //
, like this:
sed -e '/^tags:/,/---/!d; /^ - /!d; s/^ - //' 2022-*
This results in:
sap-community
jq
learning
bats
shell
exercism
cloudfoundry
kubernetes
jq
functional
javascript
shell
til
exercism
shell
til
exercism
shell
shell
...
And while we could turn to uniq
to deduplicate the list, we'll have to sort it first anyway, so we might as well use the -u
option to sort
:
sed -e '/^tags:/,/---/!d; /^ - /!d; s/^ - //' 2022-* | sort -u
This gives us what we want, a nice clean, unique list of tags:
bats
cloudfoundry
exercism
functional
javascript
jq
kubernetes
learning
sap-community
shell
til
I can now use this with fzf and its multi select mode to give me the option of choosing one or more tags:
sed -e '/^tags:/,/---/!d; /^ - /!d; s/^ - //' 2022-* | sort -u | fzf -m
This gives me a nice interface like this:
> til
shell
sap-community
learning
kubernetes
jq
javascript
>functional
>exercism
cloudfoundry
>bats
11/11 (3)
(Here, I've selected the three tags functional
, exercism
and bats
, and my selection cursor is currently pointing to til
.)
Great, I can now get on with putting the script together. I'll also need a way to specify a new tag if it's not in the list, but I'll deal with that when I get to it.
But I'm not done with my descent yet. I'm not really sure exactly what the !d
part in the first sed
instruction is, and how it works. So at this point I send the sed manual to my trusty Nexus 9 tablet, and head off to make a cup of coffee to enjoy while reading and learning more about this venerable stream editor that's been around for almost half a century.
I'm further away than ever from writing that post about the Exercism community solution I'd seen, but that's all fine. Yak shaving doesn't feel so bad when you're aware of when you're doing it.
I've had my coffee and read some of the manual. It's now clear to me how the initial sed
invocation works. Here it is in isolation:
/^tags:/,/---/!d
The first thing I needed to realise is that the !
doesn't belong to the d
, it belongs to the part before it.
The sed script overview explains that sed
commands have this structure:
[addr]X[options]
where "addr" is an address and "X" represents the actual command, or operation.
Looking at the Addresses section, we see that there are multiple ways of specifying lines that the given command is to operate upon. The specifications include direct line numbers ("numeric addresses"), and text matching ("regexp addresses"). Moreover, a range can be specified, with the start and end specifications joined with a comma ,
.
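To make that concrete, here are two tiny illustrations using my own sample data (not the blog post files): first a numeric address, then a regexp range address.

```shell
#!/usr/bin/env bash
# A numeric address: delete line 2 only.
printf 'one\ntwo\nthree\n' | sed '2d'
# A regexp range address: delete everything from the line matching
# START through the line matching END, inclusive.
printf 'a\nSTART\nb\nEND\nc\n' | sed '/START/,/END/d'
```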
This is all fine, and we grokked that in building our sed instructions earlier. But the thing I didn't realise is that the !
character is part of the "addr" specification (not part of the "X" command) and serves to negate whatever address was specified.
In other words, the "addr" part is actually:
/^tags:/,/---/!
which means "all the lines that are NOT in this range". And then the d
command deletes what's specified, i.e. deletes everything apart from sequences like this:
tags:
- shell
- til
- exercism
---
So there you have it.
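To convince myself, here's a minimal reproduction of the "delete everything NOT in the range" behaviour, using a here-doc of my own rather than the real blog post files:

```shell
#!/usr/bin/env bash
# The address /^tags:/,/---/ selects the frontmatter tag block; the !
# negates the address, so d deletes every line OUTSIDE that range.
sed -e '/^tags:/,/---/!d' <<'EOF'
layout: post
title: Example
tags:
  - shell
  - til
---
Body text that should disappear.
EOF
```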
One community solution to the Scrabble Score exercise made neat use of Bash's case statement, and another solution was rather splendid in its approach and it reminded me a little of some functional programming techniques, or perhaps MapReduce.
I find Exercism great for practice but get as much if not more pleasure and insight from reading the Community Solutions - solutions to exercises that others have completed.
My initial solution to the Scrabble Score exercise was a little pedestrian, which I find acceptable at least as the first iteration, as long as it works. That said, I had been trying to write my solution to reflect, almost visually, the instructions, the core of which was this table:
Letter Value
A, E, I, O, U, L, N, R, S, T 1
D, G 2
B, C, M, P 3
F, H, V, W, Y 4
K 5
J, X 8
Q, Z 10
I'd ended up with this:
declare word="${1^^}"
declare score=0
for ((i = 0; i < ${#word}; i++)); do
[[ "AEIOULNRST" =~ ${word:$i:1} ]] && ((score += 1))
[[ "DG" =~ ${word:$i:1} ]] && ((score += 2))
[[ "BCMP" =~ ${word:$i:1} ]] && ((score += 3))
[[ "FHVWY" =~ ${word:$i:1} ]] && ((score += 4))
[[ "K" =~ ${word:$i:1} ]] && ((score += 5))
[[ "JX" =~ ${word:$i:1} ]] && ((score += 8))
[[ "QZ" =~ ${word:$i:1} ]] && ((score += 10))
done
echo "$score"
It was ok, if not a little "bulky".
In looking at other solutions, I came across one from user Devin Miller which did what I'd been looking to achieve, but in a much neater way:
total=0
for x in $(echo ${1^^} | grep -o .); do
case $x in
[AEIOULNRST]) ((total++));;
[DG]) ((total+=2));;
[BCMP]) ((total+=3));;
[FHVWY]) ((total+=4));;
K) ((total+=5));;
[JX]) ((total+=8));;
*) ((total+=10));;
esac
done
I'd forgotten that the case
statement allows for pattern matching. The Simplified conditions section of the Bash Beginners Guide states: "Each case is an expression matching a pattern". What sort of pattern? Well, the Bash Manual explains, in section 3.5.8.1 on Pattern Matching. In Devin's solution here, the [...]
construct is used for each case expression, which "matches any of the enclosed characters". Of course! This makes for a much more concise way of expressing that scoring table. I think, for symmetry, I'd have used ((total+=1))
for the first case, just to match the rest, but there you go.
One note on the command substitution in the for
line above. There's nothing in the rules that says that external commands cannot be used; they would normally and perhaps naturally be part of any Bash script solution (after all, Bash scripts are great for encoding UNIX style constructs), so the use of the external grep
command here is fine. And it's an interesting way to iterate through the letters of the word passed to the scoring script.
The secret is in the -o
option, short for --only-matching
, and the man page describes this option thus:
Print only the matched (non-empty) parts of a matching line, with each such part on a separate output line.
Before we look at that, note that the ${1^^}
parameter expansion results in an uppercased version of the value in $1
.
So if $1
had the value hello
, then the result of echo ${1^^} | grep -o .
would be:
H
E
L
L
O
This feeds nicely into the for ... in
style loop construct used. The effect, ultimately, is the same as the C-style for loop construct I used in my solution where I used an incrementing variable i
to point to each letter of the word in turn, via the ${parameter:offset:length}
style of parameter expansion.
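The two iteration styles can be put side by side with a toy word of my own choosing: C-style indexing with ${parameter:offset:length}, then the grep -o . splitting approach.

```shell
#!/usr/bin/env bash
# Both approaches emit the word one character per line.
word="HI"
for ((i = 0; i < ${#word}; i++)); do
  echo "${word:$i:1}"      # substring of length 1 at offset i
done
echo "$word" | grep -o .   # each match (every character) on its own line
```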
I'd like to dwell briefly on another solution to this exercise, which looks like this:
set -eu
main() {
local -l str="$1"
str=${str//[^[:alpha:]]}
str=${str//[aeioulnrst]/_} # 1
str=${str//[dg]/__} # 2
str=${str//[bcmp]/___} # 3
str=${str//[fhvwy]/____} # 4
str=${str//[k]/_____} # 5
str=${str//[jx]/________} # 8
str=${str//[qz]/__________} # 10
echo ${#str}
}
main "$@"
This is a really interesting approach that appeals to my sense of beauty and intrigue - all the heavy lifting is done with the ${parameter/pattern/string}
style of parameter expansion, specifically the one where all matches are replaced because the pattern actually begins with a /
(i.e. it's ${str//[aeioulnrst]/_}
rather than ${str/[aeioulnrst]/_}
).
What is happening here is that after removing any characters that are not in the "alphabetic" POSIX class (see the POSIX Character Classes section of 18.1. A Brief Introduction to Regular Expressions), the letters are replaced by underscores, where the number of underscores in the replacement reflects the points for that letter. So for example an a
is replaced with _
reflecting a single point for that letter, whereas an f
is replaced with ____
reflecting four points for that letter. After all the replacements are done, the string is just a sequence of underscores, and how many underscores reflects the total number of points for that word (which is reflected in yet another style of parameter expansion, the length of a variable, via ${#parameter}
). Lovely!
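We can trace the underscore trick on a single word, in a mini example of my own that uses just two of the replacement rules: "cab" should score 3 + 1 + 3 = 7.

```shell
#!/usr/bin/env bash
# c and b are worth 3 points each, a is worth 1, so we expect 7.
str="cab"
str=${str//[aeioulnrst]/_}   # 1-point letters -> one underscore
str=${str//[bcmp]/___}       # 3-point letters -> three underscores
echo "${#str}"               # the length of the underscore string -> 7
```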
I don't know about you, but this sort of reminds me of the underlying philosophy of MapReduce, where the input is reduced to a sequence of simple, countable atoms - in this case, underscore characters. Given the "sequence" feeling that this solution also conveys, I think there's an element of FP philosophy too.
I completed a very basic solution to the Proverb exercise in the Bash track on Exercism and proceeded to look at some of the solutions others had submitted. A beautifully simple and succinct solution from user Glenn Jackman had the most stars, and I wanted to share a few things I learned from it.
Here is the latest iteration of this solution:
#!/usr/bin/env bash
# There must be at least 2 positional parameters
# to enter the loop:
# `i` initialized to 1
# `i < $#` test passes only if 2 or more parameters
for (( i=1, j=2 ; i < $# ; i++, j++ )); do
echo "For want of a ${!i} the ${!j} was lost."
done
# And at least one parameter to print this:
[[ -n $1 ]] && echo "And all for the want of a $1." || :
Here are three things I learned or re-learned.
The Loops and Branches chapter of the Advanced Bash Scripting Guide has examples of a C-style for loop that uses double parentheses. The last of these examples, and what we see here in the script, shows that we can actually initialise and increment more than one variable inside the double parentheses construct. In other words, in this line:
for (( i=1, j=2 ; i < $# ; i++, j++ )); do
we have both i
and j
being initialised and then incremented. This is a really neat solution for maintaining more than one index. In case you're wondering, the $#
is the number of parameters passed to the script; each parameter is available via their position in variables like this: $1
, $2
, $3
and so on (they're referred to as positional parameters). So if the invocation is scriptname hello world
then $1
is hello
and $2
is world
.
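Here's a stripped-down look at that dual-variable construct, with toy bounds of my own rather than positional parameters:

```shell
#!/usr/bin/env bash
# Two variables, i and j, initialised and incremented together inside
# a single C-style for loop.
for (( i=1, j=2 ; i < 4 ; i++, j++ )); do
  echo "$i $j"
done
```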
So while you can refer to e.g. the second parameter with $2
, what if you wanted to refer to the nth parameter, where n
was dynamic? This is the case in Glenn's solution; have a close look at this line:
echo "For want of a ${!i} the ${!j} was lost."
Here we want to refer to the "i-th" and the "j-th" parameter, whatever i
and j
are each time round the loop. Using simply a reference like this: $i
would resolve to the value of i
, which would be 1
for example (in the first iteration of the loop). But what we want is the value of the first parameter. This is why we see the !
which introduces a level of indirection. So here, we see ${!i}
and ${!j}
.
What happens is that these both resolve to "the value of the variable name in i
and j
". So in the first iteration of the loop, these then would resolve to the values of $1
and $2
. And in the second iteration, they'd resolve to the values of $2
and $3
.
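A tiny demonstration of that indirection, using a function of my own so we can supply the positional parameter values ourselves:

```shell
#!/usr/bin/env bash
# $i is just the value 2; ${!i} resolves to the value of $2 within
# the function, i.e. the second argument passed to it.
show() {
  local i=2
  echo "direct: $i, indirect: ${!i}"
}
show alpha beta gamma    # -> direct: 2, indirect: beta
```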
The last line looks like this:
[[ -n $1 ]] && echo "And all for the want of a $1." || :
The || :
construct may look a little odd. But if one considers what :
does, it makes sense (indeed, it's explained by Glenn in the comments on this iteration of his solution). The :
is the "no operation" command, and I've covered it in a previous blog post - see The no-operation command : (colon). Essentially, it does nothing, successfully. Which means that if the [[ -n $1 ]]
condition is not true (i.e. $1
is empty) then the echo
will not execute, the script will then end anyway, but with a non-zero exit code, and this is not desired.
Using || :
here is like using || true
but perhaps more idiomatic to Bash.
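The effect on the exit status can be shown with a pair of toy functions of my own: without the || :, a false condition leaves a non-zero status behind.

```shell
#!/usr/bin/env bash
# plain returns the (failing) status of the && list when $1 is empty;
# defensive ends with the no-op :, which always succeeds.
plain()     { [[ -n $1 ]] && echo "got: $1"; }
defensive() { [[ -n $1 ]] && echo "got: $1" || :; }

plain "" || echo "plain returned a non-zero status"
defensive "" && echo "defensive returned status 0"
```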
The concept of the reduce function generally is a beautiful thing. I've written about reduce in previous posts on this blog:
Being a predominantly functional language, the fact that jq
has a reduce function comes as no surprise. However, its structure and how it is wielded is a little different from what I was used to. I think this is partly due to how jq
programs are constructed, as pipelines for JSON data to flow through.
I decided to write this post after reading an invocation of reduce
in an answer to a Stack Overflow question, which had this really interesting approach to achieving what was desired:
reduce ([3,7] | to_entries[]) as $i (.; .[$i.key].a = $i.value)
Because my reading comprehension of jq
's reduce
was a little off, I found this difficult to understand at first. But now it's much clearer to me.
When I first read the entry for reduce
in the jq
manual, I found myself scratching my head a little. This is what it says:
The reduce syntax in jq allows you to combine all of the results of an expression by accumulating them into a single answer. As an example, we'll pass [3,2,1] to this expression:
reduce .[] as $item (0; . + $item)
For each result that .[] produces, . + $item is run to accumulate a running total, starting from 0.
In case you're wondering, the complete invocation, supplying the array [3,2,1]
, could be done in a number of ways, depending on your preference. Here are two examples:
Passing in the array as a string for jq
to consume via STDIN:
echo [3,2,1] | jq 'reduce .[] as $item (0; . + $item)'
Using jq
's -n
option, which tells jq
to use 'null' as the single input value (effectively saying "don't expect any input") and then using a literal array within the jq
code:
jq -n '[3,2,1] | reduce .[] as $item (0; . + $item)'
Regardless of how it is invoked, I wanted to work out which bit of the reduce
construct did what. I did so by relating the structure to what I was more familiar with - the reduce
function in JavaScript, which, if we were to do the equivalent of the above, would look like this:
[3,2,1].reduce((a, x) => a + x, 0)
So here we have:
- the array [3,2,1], the data that reduce is to process
- the reduce function itself, with two things passed to it:
  - an anonymous function (a, x) => a + x that implements how we want to reduce over the list, often referred to as the "callback" function
  - an initial value of 0 for the accumulator

If you want to understand each of these parts better, take a quick look at F3C Part 3 - Reduce basics.
When the line of JavaScript above is processed, the reduce
function first determines the initial value of the accumulator, which is 0
here (the second of the two parameters passed to it). Then it works through the array, calling the anonymous function for each item, and passing:
- the current accumulator value (0 for the first item), received in parameter a
- the current array item (3 the first time, then 2, then 1), received in parameter x

Whatever that function produces (which in this case is the value of a + x) becomes the new accumulator value, and the process continues with the next array item, and so on.
The final value of the accumulator is the final value of the reduction process (the reduce
function can produce any shape of data, not just scalar values, but that's an exploration for another time).
So how do we interpret the reduce
construct in jq
? Let's see, this is what we're looking at:
reduce .[] as $item (0; . + $item)
If we modify the $item
variable name so it's $x
, we can more easily pick out the component parts and relate them to what we've just seen:
reduce .[] as $x (0; . + $x)
Here we see:
- .[] as $x is the reference to the array we want to process with reduce (remember, this will pass through whatever list is piped into this filter) and the variable ($x) that will be used to represent each array item as they're processed
- 0 is the starting value for the accumulator
- . + $x is the expression that is executed each time around (equivalent to a + x in the JavaScript example), where the accumulator is passed in to it (i.e. the accumulator is the . in the expression)

And the final value of the . + $x expression, i.e. the final value of the accumulator, is what then represents the output of this reduce function invocation.
That's pretty much it!
I found this post from Roger Lipscombe useful for my understanding too: jq reduce.
Finally, you may also be interested in this live stream recording on what reduce
is and how it can be used to build related functions:
HandsOnSAPDev Ep.81 - Growing functions in JS from reduce() upwards
CF on Kubernetes in Docker, on my laptop
As a developer with access to SAP's Business Technology Platform, I already have free access to multiple runtime environments and services. These environments include Kubernetes, via Kyma, and Cloud Foundry.
While exploring service brokers and service consumption on SAP Business Technology Platform recently, with a view to understanding the context and role of the SAP Service Manager (another open source project in the form of Peripli), I wanted to go beyond the developer level access I had to a Cloud Foundry runtime.
In essence, I wanted my own Cloud Foundry environment instance, that I controlled and administered. That way I would be able to explore the SAP Service Manager, and service catalogues and marketplaces in general, in more detail.
In the past, I've used PCF Dev, the classic go-to for getting a laptop-local Cloud Foundry up and running. I was successful in the past, and used it to explore some Cloud Foundry aspects. But this time, I fell at the first hurdle. One of the early steps to getting such a local version up and running is to install a plugin for the cf
CLI. This failed, basically due to the official location where the plugin was stored returning 404 NOT FOUND responses.
My searches on Stack Overflow (see e.g. this answer) and in various Slack channels had me coming to the conclusion that PCF Dev was, unfortunately, a non-starter today. Folks had tried in vain to compile and use the binary portion of the plugin, but had then hit issues further down the line. The fact that the official site sports a "End of availability" badge also helped confirm this.
There was hope, though, as I discovered a couple of initiatives that involved running CF on Kubernetes. That may strike one as odd at first sight, running one environment within another, but it's not really much different from my first environment when I joined Esso as a graduate in 1987, where we ran various applications and systems, including SAP R/2, on the MVS/XA operating system ... which ran within the VM/CMS operating system.
And after all (cue generalisation and waving my arms about in the air) while CF is a developer-centric deployment and runtime platform, Kubernetes is a more generalised container orchestration system.
I proceeded to inhale as much information as I could on this topic, specifically about the two initiatives, which are KubeCF and cf-for-k8s.
Another aspect that had me scratching my head a little was the different ways I could run a local Kubernetes cluster. I considered and tried two possibilities, and both of them, independently of me trying to get CF running on them, worked well.
One is minikube, which uses a virtual machine manager. The other is kind, short for "Kubernetes in Docker". This adds yet another layer for my brain to get itself around (I'd essentially be running CF in Kubernetes in Docker, which itself -- on this macOS laptop -- is essentially a Linux virtual machine).
I started out on the KubeCF path, but had countless issues. Perhaps not surprising in the end, because while the main KubeCF landing page looks all shiny and up to date, when you dig down just one layer to the GitHub repository, you notice the ominous message "This repository has been archived by the owner. It is now read-only.". It wasn't an auspicious start, and I soon abandoned my attempts in that direction, and pivoted to cf-for-k8s.
There's a great article Getting Started with Cf-for-K8s which I found and followed. My laptop had comfortably more than the minimum hardware requirements (and as you'll see later, I think those minimum requirements are a little "light"). Kubernetes as a platform is complex, perhaps partly because it's built from different tools and projects. So the tool prerequisite list was a little longer than I'm used to seeing. That said, I had no issues, mostly thanks to brew packaging:

- kind with brew install kind (via this page) and the BOSH CLI with brew install cloudfoundry/tap/bosh-cli
- brew to install ytt and kapp; in fact, these tools, along with others, are collected together in a package called Carvel, which "provides a set of reliable, single-purpose, composable tools that aid in your application building, configuration, and deployment to Kubernetes", and I installed them all via the vmware-tanzu/carvel tap
- git, and the cf CLI (I use version 7 these days)

I also (re)installed k9s, a great Terminal User Interface (TUI) for monitoring and managing Kubernetes clusters. You can see k9s in action in the screenshot at the start of this post.
I followed the Getting Started with Cf-for-K8s instructions to the letter, and was soon deploying cf-for-k8s with kapp.
It wasn't entirely smooth sailing, but I managed to deal with the issues that came up. Here's what they were.
Once the kapp
based deployment was complete, I noticed that the cf-api-server
and cf-api-worker
pods were unstable. Sometimes they'd show the "Running" status, with all required instances running. Other times they'd switch to "CrashLoopBackoff". During this latter status, which was most of the time (due to the backoff algorithm, I guess), any cf
commands would fail with a message saying that the upstream was unhealthy.
Digging into the containers, the 'registry-buddy' seemed to be the component having problems, in both cases. It seemed that this component was involved in talking to the container registry, Docker Hub in my case. I eventually found this issue that described what I was seeing in my setup: container registry-buddy in cf-api-server and cf-api-worker pods always stop.
The user nicko170 made an extremely useful comment on this issue suggesting that it was a timeout issue and also providing a fix, which involved adding explicit timeoutSeconds
values to the configuration.
I implemented this exact fix on my fork of the capi-k8s-release repo, and then modified the entry in my vendir.yml
file to point to my own modified fork:
---
apiVersion: vendir.k14s.io/v1alpha1
kind: Config
minimumRequiredVersion: 0.11.0
directories:
- path: config/capi/_ytt_lib/capi-k8s-release
contents:
- path: .
git:
url: https://github.com/qmacro/capi-k8s-release
ref: 9ec99f41bded21a6fbe496323dbcb225d927b158
I did see a vendir.lock.yml
file while making this fix; I'm not sure where it came from, but assuming it was similar to the package-lock.json
file mechanism in Node.js, and had similar effects, I removed it, to give the modification to vendir.yml
the best chance of taking hold.
This fixed the stability of the cf-api-server
and cf-api-worker
pods. Great, thanks Nick!
This didn't happen on my first attempt (where I started to notice the cf-api-server
and cf-api-worker
issues), but it did on my subsequent attempts, when I'd removed the cluster (with kind delete cluster
) and started again. I got timeout issues relating to the pod that seemed to be responsible for some ccdb migration activities (whatever they are).
Digging around it seemed to be possibly related to there not being enough resources allocated to the cluster for it to complete the tasks assigned. So I modified my Docker Engine configuration to increase the RAM and CPU resources as follows:
After starting again, this seemed to fix the timeout issue with this pod.
Once I'd got that out of the way, I noticed that while cf-api-worker
was now stable, cf-api-server
was still having issues. Looking into the logs, it wasn't the registry-buddy
container that was in trouble this time, it was the cf-api-server
container itself.
There were logs that ostensibly looked like they were from a Ruby app, and expressed complaints about some missing configuration in the rate_limiter
area. I did a quick search within my installation directory and found this to be in a single configuration file:
config/capi/_ytt_lib/capi-k8s-release/config/ccng-config.lib.yml
The pertinent section looked like this:
rate_limiter:
enabled: false
per_process_general_limit: 2000
global_general_limit: 2000
per_process_unauthenticated_limit: 100
global_unauthenticated_limit: 1000
reset_interval_in_minutes: 60
The configuration entries that were apparently missing, according to the log, were general_limit
and unauthenticated_limit
. Looking around, I found these properties on the Setting the Rate Limit for the Cloud Controller API page in the CF documentation. So they did seem to be valid properties.
It was getting late so I just modified that ccng-config.lib.yml
file to add the two missing properties, as copies of the global_
versions that were already there.
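Hedged appropriately — this is my reconstruction of the change rather than the exact diff — the amended section would then look something like this:

```yaml
rate_limiter:
  enabled: false
  general_limit: 2000                 # added, copied from global_general_limit
  per_process_general_limit: 2000
  global_general_limit: 2000
  unauthenticated_limit: 1000         # added, copied from global_unauthenticated_limit
  per_process_unauthenticated_limit: 100
  global_unauthenticated_limit: 1000
  reset_interval_in_minutes: 60
```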
I then rebuilt the deployment configuration and deployed once more. This basically involved just running this invocation again:
kapp deploy -a cf -f <(ytt -f config -f ${TMP_DIR}/cf-values.yml)
Note that the deployment configuration rebuild is done by the ytt
invocation, which is inside the <( ... )
process substitution part of the invocation.
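Process substitution deserves a quick aside, as it's doing real work here: the shell runs the inner command and hands its output to the outer command as a file-like argument. A trivial illustration of my own, unrelated to cf-for-k8s:

```shell
# Each <( ... ) runs its command and exposes the output via a
# temporary file descriptor path, which diff reads like a file.
# (Requires bash or zsh; plain sh doesn't support <( ... ).)
diff <(printf 'a\nb\n') <(printf 'a\nb\n') && echo identical
# → identical
```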
This seemed to satisfy the cf-api-server
into not complaining any more.
At this point, everything became eventually stable. On my laptop, this was taking between 5 and 10 minutes. It remained stable and I was able to authenticate with cf
, create an organisation and space, and deploy (via cf push
) a simple test app.
That was quite a journey, and I've learned a lot along the way. Now comes the real learning, at the Cloud Foundry administrative level!
Exercism is a great resource for learning and practising languages. I've dabbled in a couple of tracks and it's a fun and compelling way to iterate and meditate on constructs in the languages you're interested in. One of the very appealing things to me is that as well as a very capable online editor environment, there's a command line interface (CLI) for working locally.
I've recently been digging into jq and wanting to build my knowledge out beyond the classic one-liners one might normally express in a JSON processing pipeline situation. jq
is a complete language, with a functional flavour and there's support for modules, function definitions and more. The manual felt pretty terse at first, but after a while my brain got used to it.
I thought it might be an interesting exercise to see how a jq
track might work with Exercism; initially I just want to perhaps use some of the existing tests to code against, where I provide jq
scripts to compute the right answers.
As jq
is "just another Unix tool" that works well on the command line, it seemed logical to try and start with something similar, which I did - the bash
track. Here's what I did to feel my way into this journey. It's early days, and this blog post is more of a reminder to my future self what I did.
Having set myself up for working locally I downloaded a simple exercise from the Bash track - Reverse String, and moved it to a new, local jq
track directory:
# /home/user
; cd work/Exercism/
# /home/user/work/Exercism
; ls
./ ../ bash/
# /home/user/work/Exercism
; mkdir jq
# /home/user/work/Exercism
; exercism download --exercise=reverse-string --track=bash
Downloaded to
/home/user/work/Exercism/bash/reverse-string
# /home/user/work/Exercism
; mv bash/reverse-string jq/
# /home/user/work/Exercism
; cd jq/reverse-string/
# /home/user/work/Exercism/jq/reverse-string/
; ls
./ ../ .exercism/ HELP.md README.md bats-extra.bash reverse_string.bats reverse_string.sh
# /home/user/work/Exercism/jq/reverse-string
;
The bash
track uses the Bash Automated Testing System, known as bats
, for unit testing. The tests are in the reverse_string.bats
file and look like this (just the first two tests are shown here):
#!/usr/bin/env bats
load bats-extra
# local version: 1.2.0.1
@test "an empty string" {
#[[ $BATS_RUN_SKIPPED == "true" ]] || skip
run bash reverse_string.sh ""
assert_success
assert_output ""
}
@test "a word" {
[[ $BATS_RUN_SKIPPED == "true" ]] || skip
run bash reverse_string.sh "robot"
assert_success
assert_output "tobor"
}
I modified each test line (run bash <sometest>.sh <test input>
) to reflect a more jq
oriented invocation, which looks like this:
run jq -rR -f <sometest>.jq <<< <test input>
This:

- uses jq instead of bash
- uses the -r flag to tell jq to output raw strings, rather than JSON texts (this means that the value banana would be output as is, rather than "banana" with double quotes; a double-quoted string is valid JSON and jq strives to output valid JSON by default)
- uses the -R flag to tell jq to expect raw strings, rather than JSON input
- uses the -f flag to point to a file containing the actual jq script (called a "filter")
- uses the <<< here-string redirection, as jq expects the input via STDIN (so far, the <test input> values have been scalar values)

This is what the above excerpted unit test file now looks like:
#!/usr/bin/env bats
load bats-extra
# local version: 1.2.0.1
@test "an empty string" {
#[[ $BATS_RUN_SKIPPED == "true" ]] || skip
run jq -rR -f reverse_string.jq <<< ""
assert_success
assert_output ""
}
@test "a word" {
[[ $BATS_RUN_SKIPPED == "true" ]] || skip
run jq -rR -f reverse_string.jq <<< "robot"
assert_success
assert_output "tobor"
}
The solution file supplied by default here is reverse_string.sh
and contains some hints as to how to structure the contents. Basically, the file has to be written in such a way that when it's invoked, with the input supplied, it outputs the expected answer.
So here, I created reverse_string.jq
to be used instead of the default reverse_string.sh
. Having deliberately chosen a simple exercise, here's what my solution looks like in this file:
#!/usr/bin/env jq
split("") | reverse | join("")
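Run the way the modified tests run it — raw input in, raw output out — it does what it says (shown here with the filter inline rather than via -f):

```shell
# -R reads the input line as a raw string, -r prints the result
# without JSON quoting; the filter reverses the characters.
echo 'robot' | jq -rR 'split("") | reverse | join("")'
# → tobor
```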
I'm a big fan of entr and used it here to rerun the unit tests every time I changed either them or my solution file reverse_string.jq
, like this:
# /home/user/work/Exercism/jq/reverse-string
; ls *.bats *.jq | entr -c bats reverse_string.bats
This provided me with a lovely unit test result that would automatically update if I modified the solution or even the unit test file itself:
✓ an empty string
- a word (skipped)
- a capitalised word (skipped)
- a sentence with punctuation (skipped)
- a palindrome (skipped)
- an even-sized word (skipped)
- avoid globbing (skipped)
7 tests, 0 failures, 6 skipped
As you can see from the unit test results, only one test ("an empty string") was executed. The others are skipped. This is by design - see the Skipped tests section of the test documentation.
Activating the further tests is just a matter of commenting out the [[ $BATS_RUN_SKIPPED == "true" ]] || skip
line - note that the first test in the file has this line commented out by default so just that first test is run initially.
Alternatively, as you can see from that line, the BATS_RUN_SKIPPED
environment variable can be set to true
instead, and all of the tests will be run, like this:
# /home/user/work/Exercism/jq/reverse-string
; BATS_RUN_SKIPPED=true bats reverse_string.bats
✓ an empty string
✓ a word
✓ a capitalised word
✓ a sentence with punctuation
✓ a palindrome
✓ an even-sized word
✓ avoid globbing
7 tests, 0 failures
Looks like that jq
filter passes all the tests!
Anyway, that's as far as I got - I think there could be some mileage in pursuing this approach further. Now it's time for me to use this technique to help me dig into writing a jq
filter to solve the Scrabble Score exercise!
There's a great discussion going on right now over on the SAP Community in the following thread:
New or not to the SAP Community, share your story!
Craig kicks things off talking about the community and asks folks for their story - how they came to be involved. I replied to the thread sharing my story, and I've also reproduced it here.
My involvement with the SAP community goes way back to 1995 where I started a mailing list for SAP practitioners around the world. I ran that mailing list for a year or two, and it was hard work; performing administration and maintenance tasks each evening, from my laptop in hotel rooms while I travelled around to different SAP customers while working as a consultant.
Why a mailing list? Well, the Web was still very new and very few folks had access to it, and to be honest, mailing lists were the vehicle for communities back then. You can read more about that mailing list, and how it subsequently transformed into SAP-R3-L, in these posts: The SAP developer community 10 years on (note that this post is under my previous ID 'dj.adams') and Monday morning thoughts: community engagement.
Fast forward from there to 2002 where, having just written a book for O'Reilly, I got directly involved in a project that was staffed by folks from SAP and O'Reilly. The project was to conceive and bring to life a public community website for SAP folks - customers, partners and individual consultants and contractors. Over the next few months we worked on design, content areas and so on, and went through a couple of platform iterations.
Finally, in early 2003 we were almost ready to launch. But we needed content, so I, along with some others, also got involved in writing content for the new site, to publish over the weeks and months after go-live.
The site was launched in May 2003 and I wrote the first external blog post. I also then started to seed the site areas with content on various subjects. As you may have guessed, this site was launched as the SAP Developer Network (SDN), later renamed SAP Community Network and now SAP Community.
So that's my story.
Like with any break, short or long, you're away from home chores and absent from work. So there's generally less to do. And from a narrowboat perspective, while one hears about the change of pace, how everything happens more slowly, for me it's the side effect of these aspects that really resonates. The side effect is that there's more time for everything. And in the context I've described, that everything is not much at all.
The upshot of this is that I have more time to think, or to let my mind wander. I can take my time and enjoy the ceremony, the precision, of making the perfect cup of coffee. I can remind myself of what the sky looks like, how the daylight changes over time. I can contemplate the essentials that are otherwise lost to me in the regular daily blur of noise and activity - staying warm and dry, eating, resting and allowing my intangible core to catch up with the rest of me.
Both previous breaks on Queenie have been cruising breaks, but I had the opportunity to book a three night static winter stay on her, which happened this weekend.
I'd booked in late autumn last year, and my busy schedule hadn't allowed me to think much more of it until the days leading up to that long weekend. Working from home by default, but with life in general and the pandemic twist in particular adding an extra layer of stress and complexity, I hadn't realised how much I needed to press the pause button.
So it was in this state of mind that I headed down to Anderton Marina on a Friday afternoon in late January, to be greeted by Hester. Queenie was already warm, the fire was lit and inviting me in from the cold evening. The silence inside the boat was loud and the perfect companion to the glow of the coals.

Thus began an immediate expanse of time, slow time, that enveloped me as I stepped in from the stern. A chance to allow the momentum of life's juggernaut to fade slightly as I caught my breath. A chance to stop thinking about work, about what my current life situation is throwing at me, and to dive into simplicity. Reading articles in magazines I'd never consider having time for, learning some new technique in a data manipulation language that I didn't think I could allow myself the mental space to investigate and enjoy, deliberating everything and also nothing.
There's a wonderful streak of adventure and discovery that runs through a narrowboat holiday on the canals. That is clear. But there's also an equally wonderful appeal of a few days and nights of static hire. It's like having a small but perfectly formed boutique hotel all to yourself, with the added bonus of endless water around you, and endless sky above you.
It's a calm, floating context that provides the perfect environment where you can press the pause button. Give it a try.
Queenie is a 50ft/15.24m narrowboat based at Anderton Marina, Uplands Road, Anderton, Northwich, Cheshire, CW9 6AJ. The boat has a Canaline engine and is a great canal boat to handle. Queenie has a cruiser stern (lots of room on the back deck). The hull was manufactured by Nick Thorpe, Staffordshire.
Originally published on the Star Narrowboats Holidays website.
I came across the psFormat
Docker configuration option recently, and it got me thinking about how I strive for neat terminal output.
I'm unashamedly a fan of the terminal, of the command line experience. The hashtag #TheFutureIsTerminal is one I'm fond of using.
I like things neat and orderly, and this does not include output from commands where each line is wrapped beyond the width of the current terminal I'm in. It's not the end of the world but it does make things more difficult to read.
Here are a couple of examples. The first is the default that you get from a docker ps
invocation (I actually prefer the equivalent docker container ls
command, but that's a story for another time):
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAM
ES
8d04bcad0acb myoung34/github-runner:2.285.1 "/entrypoint.sh ./biā¦" 3 days
ago Up 3 days git
hub-runner
6794404db2c5 linuxserver/heimdall:latest "/init" 5 month
s ago Up 2 weeks 0.0.0.0:1080->80/tcp, 0.0.0.0:1443->443/tcp hei
mdall
2209e4f7b60b jkaberg/weechat "/run.sh '--dir /weeā¦" 5 month
s ago Up 2 weeks 9001/tcp wee
chat
faba27c91916 mvance/unbound:latest "/unbound.sh" 5 month
s ago Up 2 weeks (healthy) 0.0.0.0:5335->53/tcp, 0.0.0.0:5335->53/udp unb
ound
8f9406be8de1 pihole/pihole:latest "/s6-init" 5 month
s ago Up 2 weeks (healthy) pih
ole
1bae70113281 linuxserver/freshrss:latest "/init" 7 month
s ago Up 2 weeks 0.0.0.0:8002->80/tcp, 0.0.0.0:9002->443/tcp fre
shrss
The long output lines are wrapped in the terminal; here, the simulation is for 80 characters, arguably the default and de facto standard width.
In case you're curious, I "simulated" this wrapping for including the output in the Markdown source of this post, using the fold command, like this: docker container ls | fold.
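fold itself is worth knowing: by default it wraps at 80 columns, and -w sets a different width. A tiny demonstration:

```shell
# Wrap a 10-character line at 4 columns.
printf '%s\n' 'abcdefghij' | fold -w 4
# → abcd
# → efgh
# → ij
```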
The second example is from the SAP Business Technology Platform (BTP) CLI. Most
of the commands available with this CLI output lines longer than 80 characters
too; here's the output from btp get accounts/global-account --show-hierarchy
:
OK
Showing details for global account fdce9323-d6e6-42e6-8df0-5e501c90a2be...
āā zfe7efd4trial (fdce9323-d6e6-42e6-8df0-5e501c90a2be - global account)
ā āā trial (z78e0bdb-c97c-4cbc-bb06-526695f44551 - subaccount)
type: id: display name: parent i
d: parent type: directory features: region:
subdomain: state: state message:
global account zdce9323-d6e6-42e6-8df0-5e501c90a2be zfe7efd4trial
zfe7efd4trial-ga OK Unsuspension account completed
subaccount z78e0bdb-c97c-4cbc-bb06-526695f44551 trial zdce9323
-d6e6-42e6-8df0-5e501c90a2be global account eu10
zfe7efd4trial OK Subaccount created.
I don't know about you, but neither of these outputs can be easily and quickly understood.
Here are three approaches I'm using to make this situation better.
Embracing the Unix philosophy, I created a very simple script trunc which is effectively a call to the cut
command, like this:
cut -c 1-"$(tput cols)"
The terminal width is determined using tput cols
and then passed as a value to cut
. The -c
option specifies what should be cut, based on a range of character positions, from 1 to however many columns there are in the terminal.
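Put together, the whole of trunc is just a couple of lines (a sketch of the script as described; the filename is the one mentioned above):

```shell
#!/usr/bin/env bash
# trunc - truncate each line of STDIN to the current terminal width.
# tput cols reports how many columns the terminal has; cut then keeps
# characters 1 through that count on every line.
cut -c 1-"$(tput cols)"
```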
As a script, trunc
can thus be used to tidy up the output of anything. Here's the effect of calling docker ps | trunc
in an 80 column terminal:
CONTAINER ID IMAGE COMMAND CREATED
8d04bcad0acb myoung34/github-runner:2.285.1 "/entrypoint.sh ./biā¦" 3 days
6794404db2c5 linuxserver/heimdall:latest "/init" 5 month
2209e4f7b60b jkaberg/weechat "/run.sh '--dir /weeā¦" 5 month
faba27c91916 mvance/unbound:latest "/unbound.sh" 5 month
8f9406be8de1 pihole/pihole:latest "/s6-init" 5 month
1bae70113281 linuxserver/freshrss:latest "/init" 7 month
And here's what the output of btp get accounts/global-account --show-hierarchy | trunc
looks like:
OK
Showing details for global account fdce9323-d6e6-42e6-8df0-5e501c90a2be...
āā zfe7efd4trial (zdce9323-d6e6-42e6-8df0-5e501c90a2be - global account)
ā āā trial (z78e0bdb-c97c-4cbc-bb06-526695f44551 - subaccount)
type: id: display name: parent i
global account zdce9323-d6e6-42e6-8df0-5e501c90a2be zfe7efd4trial
subaccount z78e0bdb-c97c-4cbc-bb06-526695f44551 trial zdce9323
While I don't get all the information, the information that I'm likely to need is there, and it's so much easier to read. So much so, in fact, that I've enveloped all calls to the btp CLI so that I can apply trunc
to output from get
and list
commands automatically:
btp () {
if [[ $1 =~ ^(get|list)$ ]]; then
btpwrapper "$@" | trunc
else
"$HOME/bin/btp" "$@"
fi
}
The
btpwrapper
is another function in that library of btp CLI related functions that tries to deal with the unwanted "OK" and empty line output when each command completes. But that's a story for another time.
So now, with this btp
function, I can just execute get
and list
actions with the btp CLI and the output is automatically truncated to the width of my current terminal. Which helps a lot.
Watching a recording of the presentation "Tricks of the Captains" by Adrian Mouat at DockerCon 2017 I learned about the psFormat
configuration option. The output of Docker CLI commands can be formatted with Go templates. There's a useful set of examples in the Format command and log output topic in the Docker documentation.
These Go templates can be specified on the command line directly with the docker
invocation, using the --format
option like this:
docker ps --format "table {{.ID}}\t{{.Image}}\t{{.Status}}\t{{.Names}}"
This will cause the output to be formatted according to the template specified; the above command will produce something like this:
CONTAINER ID IMAGE STATUS NAMES
8d04bcad0acb myoung34/github-runner:2.285.1 Up 3 days github-runner
6794404db2c5 linuxserver/heimdall:latest Up 2 weeks heimdall
2209e4f7b60b jkaberg/weechat Up 2 weeks weechat
faba27c91916 mvance/unbound:latest Up 2 weeks (healthy) unbound
8f9406be8de1 pihole/pihole:latest Up 2 weeks (healthy) pihole
1bae70113281 linuxserver/freshrss:latest Up 2 weeks freshrss
So much nicer! I only really need to see information for these columns (the ID, image name, status and container name(s)), so that's what I specify in the template.
It's a bit of a pain having to remember to specify this --format
option each time, with the template. Of course, I could create a function that did this for me but the Docker CLI way is to specify the template in a configuration property.
The Docker CLI configuration file (for example in $HOME/.docker/config.json
) is where to put this:
{
"psFormat": "table {{.ID}}\t{{.Image}}\t{{.Status}}\t{{.Names}}"
}
Now the output will be formatted according to that template automatically. Nice!
I'm sure there are other techniques and approaches for making output appear more readable. I'd love to hear of more - let me know in the comments.
In analysing various GitHub issues and pull requests recently, I needed to be able to open up a number of them in my browser, one in each tab. The GitHub issue and pull request URLs are determined from a script, and I wanted to be able to open up a new Chrome window on the screen with all of the URLs loaded.
I came across the excellent chrome-cli tool a while back but didn't have a pressing need to use it at the time. You can control Chrome and its derivatives from the command line; the help output gives you an idea of what's possible - here's an excerpt from it:
chrome-cli list windows (List all windows)
chrome-cli list tabs (List all tabs)
chrome-cli list tabs -w <id> (List tabs in specific window)
chrome-cli list links (List all tabs' link)
chrome-cli list links -w <id> (List tabs' link in specific window)
chrome-cli info (Print info for active tab)
chrome-cli info -t <id> (Print info for specific tab)
chrome-cli open <url> (Open url in new tab)
chrome-cli open <url> -n (Open url in new window)
chrome-cli open <url> -i (Open url in new incognito window)
chrome-cli open <url> -t <id> (Open url in specific tab)
chrome-cli open <url> -w <id> (Open url in new tab in specific window)
chrome-cli close (Close active tab)
chrome-cli close -w (Close active window)
chrome-cli close -t <id> (Close specific tab)
chrome-cli close -w <id> (Close specific window)
...
Anyway, this GitHub issue and pull request analysis was the perfect opportunity to try it out for real. The analysis script I have spits out GitHub issue and pull request URLs based on a filter, so I wrote a script to take these URLs, one per line, and open them up in tabs in a new Chrome window.
I specifically wanted a new Chrome window, rather than have the tabs open in any existing window, and I had to do a bit of jiggery pokery to get the desired effect - you'll see this in the script (if anyone has a better suggestion please let me know).
Here's the script I wrote, quick and dirty, but it works, and sometimes, like this time, it's all that's needed.
#!/usr/bin/env bash
set -o errexit
# Create a new Chrome window, then read in a list of URLs and open each
# one in a new tab in that Chrome window. Quick and dirty, but it works.
#
# To get a new window and its ID - open an empty placeholder URL in a
# new window, this returns the ID of the new tab (not window); the
# window ID is one less than the ID of the new tab (potentially brittle,
# but meh).
declare windowid tabid url
# Open new window with placeholder tab
tabid="$(chrome-cli open about:blank -n | awk '/^Id:/ { print $2 }')"
windowid="$((tabid - 1))"
# Open URLs in new tabs of that new window
while read -r url; do
chrome-cli open "$url" -w "$windowid"
done
# Close the tab containing the original placeholder URL
chrome-cli close -t "$tabid"
I open the about:blank
page in a new Chrome window (chrome-cli open about:blank -n
) - this is to bring about the creation of the new window itself. What's returned from this however is the ID of the new tab, which is one more than the ID of the new window. Once I've opened up the URLs, I can then close the about:blank
tab as I don't need it.
Here it is in action (with some test URLs):
And that's about it. Definitely worth giving chrome-cli a whirl!
I'm writing more Dockerfiles, not least because I'm using a development container for 95% of my daily work, but also because the dockerisation of tools and environments appeals to me greatly. I came across hadolint which is a Dockerfile linter written in Haskell (hence the name, I guess).
I'm a big fan of shellcheck (see the post Improving my shell scripting) and the structured way it communicates the information, warning and error messages with codes in a standard format (SCnnnn). So I was immediately attracted to hadolint
in two ways - first, that it referenced shellcheck, but mostly because it implemented and managed its own rules in a very similar way - each of them with a code in a standard format (DLnnnn) and individually documented too, just like shellcheck
.
There are different points in your workflow that you can integrate such a tool - these are nicely described in a dedicated integration page. I wanted to have the linting happen in my editor, and am already using the Asynchronous Linting Engine so it was quite straightforward. Here's what I did:
I installed hadolint
with Homebrew on my macOS host, and by pulling down the latest binary in the Dockerfile for my development container. It's a single executable, which is quite neat. I may look into using hadolint
as a Docker image instead, although I didn't at this stage because of various reasons (mostly involving a recently introduced security policy on this work laptop that automatically stops the SSH daemon, rendering the secure remote access to my Docker engine useless. But that's a story for another time).
I already use various tools for linting my content - shellcheck
, yamllint
and markdownlint, and have configuration set up for that, so I just added hadolint
to the list, which now looks like this:
let g:ale_linters = {
\ 'sh': ['shellcheck', 'language_server'],
\ 'yaml': ['yamllint'],
\ 'markdown': ['markdownlint'],
\ 'dockerfile': ['hadolint'],
\ }
Because I sometimes create Dockerfiles with different names, I also added a new section to my Vim configuration telling it that these files are also to be treated as Dockerfiles:
augroup filetypes
au!
autocmd BufNewFile,BufRead Dockerfile* set filetype=dockerfile
augroup END
Now I get lovely warnings and errors in the left hand column so that I can improve:
In case you're wondering, the message details are shown at the bottom of my editor when I select the lines, and they are (in order):
All very helpful - thanks hadolint
!
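To give a flavour of what it flags, here's a deliberately sloppy two-instruction Dockerfile of my own; hadolint complains about the untagged base image (DL3006) and the unpinned apt packages (DL3008):

```dockerfile
# hadolint flags both instructions below:
# DL3006: always tag the version of an image explicitly
FROM ubuntu
# DL3008: pin versions in apt get install
RUN apt-get update && apt-get install -y curl
```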
In part 1 I took a first look at fff
, "a simple file manager written in Bash", focusing on the main
function, and learned a lot. In this part I take a look at the first function called from main
, and that is get_ls_colors
. I'm continuing to use the same commit reference as last time, i.e. the state of fff
here.
Here's the context of the call to get_ls_colors
from main
:
((${FFF_LS_COLORS:=1} == 1)) &&
get_ls_colors
Now it's fairly obvious that this has something to do with how the fff
display is coloured, and we get some extra clue about this from the man page content, in the customisation section:
# Use LS_COLORS to color fff.
# (On by default if available)
# (Ignores FFF_COL1)
export FFF_LS_COLORS=1
First, what is LS_COLORS
? Well it's an environment variable that controls colourisation for the output of ls
- so different types of files can be shown in different colours. And it looks like fff
can use the configuration in LS_COLORS
.
So far so good, but there's also some fallback colour mechanism that we can see in the Customisation section of the main README too. I didn't quite grok the comment "On by default if available" but it became clear once I'd looked into LS_COLORS
and remembered the assignment type of parameter expansion that we see above. In other words, ${FFF_LS_COLORS:=1}
above takes care of the "On by default" part.
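To see that assign-default form of parameter expansion in isolation, here's a minimal sketch (the unset/reset dance is just for demonstration):

```shell
# ":=" assigns the default when the variable is unset or empty,
# and the expansion then yields that default...
unset FFF_LS_COLORS
echo "${FFF_LS_COLORS:=1}"   # prints 1
echo "$FFF_LS_COLORS"        # the assignment stuck: prints 1

# ...but an existing non-empty value is left alone.
FFF_LS_COLORS=0
echo "${FFF_LS_COLORS:=1}"   # prints 0
```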
What about the "if available" part though?
My operating system preference is Linux, but I've not had the chance to use it for work for a long time; my current work machine OS is macOS. While that goes quite far in giving me the *nix environment I feel most at home in, its heritage is the BSD flavour, which I'm not as used to.
One of the differences which is very relevant here is how the colours for ls
output are controlled. I'd been looking for the existence of LS_COLORS
in my shell prompt on my main macOS machine, but hadn't found it. What I had found was an environment variable CLICOLOR
which was set to true
, and there was a -G
option for ls
to turn on colours, instead of the --color=auto
that I've seen before. And confusingly, I'd seen reference to LSCOLORS
(not LS_COLORS
).
I hadn't really paid much attention until now because the output of ls
in my macOS terminal is coloured already; this is because having the CLICOLOR
environment variable set has the same effect as the -G
option, i.e. to turn on colours.
Moreover, the colours in this context can be customised using values in another environment variable LSCOLORS
.
That's all well and good, but since I've started using dev containers in earnest, I have remote, portable, consistent and reconnectable access to my ideal working environment (I run most of my containers on my Synology NAS). So I'm back to a Linux flavoured *nix environment, which is wonderful.
But this means that I'm now using a non-BSD ls
, which means that CLICOLOR
isn't applicable, nor is LSCOLORS
. There's been a whole host of articles, posts and Stack Overflow Q&A entries written on this, so I won't add to it. Suffice it to say that fff
respects the GNU ls
, which means LS_COLORS
is relevant ... and not LSCOLORS
. While both these environment variables are used to customise the colours, the formats in which the colour selections are specified are wildly different. Shortly, we'll see the LS_COLORS
format, and how it's processed in get_ls_colors
.
Regarding the LS_COLORS
environment variable, I read a few posts online to learn more about this. One that I found helpful is Configuring LS_COLORS. This one also introduced me to the related dircolors
command. And looking at the example value for LS_COLORS
, it's clear that it's quite a complex combination of specifications.
Anyway, let's get back to the script.
Rather than look directly on the Web at what the LS_COLORS
specifications are, let's first see if we can get a general feeling for them from reading the code here. Actually we get our first clue from the comment that describes the function as a whole:
get_ls_colors() {
# Parse the LS_COLORS variable and declare each file type
# as a separate variable.
# Format: ':.ext=0;0:*.jpg=0;0;0:*png=0;0;0;0:'
The value looks like a series of :
separated pairs of file patterns and colour specifications.
The first bit of code is there so get_ls_colors
can be aborted if there's nothing to parse:
[[ -z $LS_COLORS ]] && {
FFF_LS_COLORS=0
return
}
Here we have the classic conditional expression that we saw in part 1; here, the -z
unary expression evaluates to true if the length of the given string -- in this case the value of the LS_COLORS
variable -- is zero. If it is, there's no point in trying to parse anything, and the variable that keeps track of whether colours should be shown (FFF_LS_COLORS
) is set to 0
before the function is ended early.
In my journey through Bash scripting so far, seeing the return
statement like this is still quite unusual, but it makes a lot of sense here. It can take an optional integer argument which is returned to the caller as the exit status.
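As a sketch of that guard-and-return shape (parse_colors is a made-up function name, not from fff):

```shell
# Bail out early with a non-zero exit status if there's nothing to do.
parse_colors() {
    [[ -z $LS_COLORS ]] && {
        return 1   # the optional integer becomes the function's exit status
    }
    echo "parsing..."
}

unset LS_COLORS
parse_colors || echo "nothing to parse"   # prints "nothing to parse"
```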
Next comes a lovely line, with a comment:
# Turn $LS_COLORS into an array.
IFS=: read -ra ls_cols <<< "$LS_COLORS"
Don't confuse =: with any sort of assignment operator you might have seen elsewhere (such as := in Go or Mathematica) - it's just the assignment (=) of a colon (:) to IFS.
We saw one use of read
in part 1 but that was more about how the read flags were constructed and used. Here we have another use of read
, arguably a very common one, i.e. in combination with a temporary setting of a value for the IFS
environment variable. By temporary, I mean that the assignment holds just for the duration of that single read command, not beyond.
Let's break it down: we have the explicit setting of IFS
, a read
statement, which is being given the value of the LS_COLORS
variable as its input, via the rather splendid looking <<<
.
So first, what's IFS
? Well, it stands for "input field separator", or "internal field separator". The best overview I've found is on the Bash Wiki where it describes IFS
as "a string of special characters which are to be treated as delimiters between words/fields when splitting a line of input". The default value for IFS
consists of three different whitespace characters: a space, a tab and a newline.
And if IFS
is unset (i.e. "has no value", which is different from "has a value that is empty") then the effect is as if it were to contain these three characters.
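A quick way to convince yourself of the unset-versus-empty distinction (a small sketch):

```shell
# Unset IFS: splitting behaves as if IFS were space, tab and newline.
unset IFS
read -ra words <<< "a b c"
echo "${#words[@]}"   # 3

# Empty IFS: no splitting at all; the whole line lands in one element.
IFS= read -ra words <<< "a b c"
echo "${#words[@]}"   # 1
```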
What's the splitting that's going on here, then? Well that's in the context of the read
command.
The read
command is a builtin, the description of which is "Read a line from the standard input and split it into fields". We'll get to what's being read shortly, but at least we now understand the context of the splitting. Moreover, the read invocation here is with a couple of options:
- -r : do not allow backslashes to escape any characters
- -a array : assign the words read to sequential indices of the array variable, starting at zero

The first option is very commonly seen with read
and in fact if you don't specify it in your script, shellcheck will point it out with message 2162 read without -r will mangle backslashes. It's rare that you're going to want to have backslashes in your input to be treated as escape characters, but that's what read
will do, unless you supply the -r
option.
The second option means that the fields that result from splitting will be placed in an array. Without an array, you might do something like this:
; read -r first second <<< "hello world"
; echo $first
hello
; echo $second
world
Using an array is sometimes more helpful, from a dynamic perspective:
; read -ra words <<< "hello world"
; echo ${words[0]}
hello
; echo ${words[1]}
world
In part 1 I took a brief look at output redirection. Now it's the time to look at input redirection.
I see the use of the input redirection symbol (<
), and how it "grows" (to <<
and even <<<
) as the input "shrinks", in the same way as I see the first part of vehicle licence plates in Germany.
When I was over there, my car had the licence plate KR DJ 400
. The first part of a licence plate, before the space, reflects the place the vehicle was registered. For large cities and towns, there's a single letter, for example D represents DĆ¼sseldorf. For medium sized places there are two letters, for example KR for Krefeld. And for small places there are three letters, for example WOB for Wolfsburg.
What do I mean about the input shrinking? Well to me, a file is "large", some in-line data is "smaller", and a string is "smaller still". Let's have a look at each one in turn. The syntax and examples are taken verbatim from the GNU Bash Reference Manual's redirections section and contain extra syntax (such as the [n]
below) but you can ignore that for now.
Input from a file (<)

[n]<word

From the GNU Bash Reference Manual: "The file whose name results from the expansion of word to be opened for reading on file descriptor n, or the standard input (file descriptor 0) if n is not specified." In other words, input is taken from the file word.
Here documents (<<)

[n]<<[-]word
        here-document
delimiter
A slightly strange name, this is a "here document". I think of this as "the input is right here!", rather than in a file. So, arguably "smaller" than a file (to follow the German licence plate parallel). To quote the GNU Bash Reference Manual, "input from the current source [is taken] until a line containing only word (with no trailing blanks) is seen.". In other words, word
here is not a filename, but a delimiter. The delimiter EOF
is often seen.
I do find that this example from the reference manual is a little confusing as word
is not the same as delimiter
(which it would be in reality), and the indentation you see relates to the <<-
version which you can read up on in that section.
Here strings (<<<)

[n]<<< word
Smaller still is the "here string", the younger sibling of the "here document". This time, word
is not a filename, nor is it a delimiter. It's actually the input.
To quote the GNU Bash Reference Manual again, "the word undergoes tilde expansion, parameter and variable expansion, command substitution, arithmetic expansion, and quote removal. Filename expansion and word splitting are not performed. The result is supplied as a single string, with a newline appended, to the command on its standard input (or file descriptor n if n is specified)".
In the case of here documents, the body is likewise subject to parameter expansion, command substitution and arithmetic expansion - unless the delimiter is quoted, in which case none of these happen. Now hopefully the "hello world" example earlier makes sense.
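To see the effect of delimiter quoting on a here document, here's a minimal sketch:

```shell
name=world

# Unquoted delimiter: the body undergoes parameter expansion.
cat <<EOF
hello $name
EOF
# prints: hello world

# Quoted delimiter: the body is taken completely literally.
cat <<'EOF'
hello $name
EOF
# prints: hello $name
```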
Anyway, where in the fff
script were we? Ah yes, at this line:
# Turn $LS_COLORS into an array.
IFS=: read -ra ls_cols <<< "$LS_COLORS"
So we now know that via a here string construction, the value of the LS_COLORS
variable is the input that is read (into the ls_cols
array).
Now we understand that, the final thing to think about is how the IFS
value of :
comes into play here. In the "hello world" example earlier, the default whitespace value(s) in IFS
meant that "hello world" was split into "hello" and "world". To understand why IFS
is being set to :
, we need to know what a typical value for LS_COLORS
looks like.
The comment we came across earlier gives us a nice small example:
# Format: ':.ext=0;0:*.jpg=0;0;0:*png=0;0;0;0:'
Let's manually set the value of LS_COLORS
to the example value here, execute the line with the read
command, and then look at what we get in ls_cols
:
; LS_COLORS=':.ext=0;0:*.jpg=0;0;0:*png=0;0;0;0:'
; IFS=: read -ra ls_cols <<< "$LS_COLORS"
; echo "${ls_cols[@]}"
.ext=0;0 *.jpg=0;0;0 *png=0;0;0;0
OK, sort of as expected. But what's that space character right at the start of the line, just before the .ext=0;0
? We can see it more clearly by asking for the values to be printed on separate lines, like this:
; printf "%s\n" "${ls_cols[@]}"
.ext=0;0
*.jpg=0;0;0
*png=0;0;0;0
Because the value of LS_COLORS
starts with a colon, there's an empty value that gets put into the first slot of the array.
But this empty value doesn't seem to matter, as the rest of the get_ls_colors
function is looking for specific patterns anyway. So let's start looking at that next.
Next up we have something similar to what we saw in part 1 - a C-style for loop. This time it's not infinite:
for ((i=0;i<${#ls_cols[@]};i++)); {
# Separate patterns from file types.
[[ ${ls_cols[i]} =~ ^\*[^\.] ]] &&
ls_patterns+="${ls_cols[i]/=*}|"
# Prepend 'ls_' to all LS_COLORS items
# if they aren't types of files (symbolic links, block files etc.)
[[ ${ls_cols[i]} =~ ^(\*|\.) ]] && {
ls_cols[i]=${ls_cols[i]#\*}
ls_cols[i]=ls_${ls_cols[i]#.}
}
}
The loop control is based on iterating through the indices of the ls_cols
array. Within the loop there are two actions that can be carried out, each dependent on a particular condition. Let's look at them one at a time, helped by what we see in the comments.
Not having looked too hard at the LS_COLORS
specification, I wasn't exactly sure what this first condition/action was, what a "pattern" was, specifically. I had a rough idea of course, but things became clearer by looking at the detail of the condition:
[[ ${ls_cols[i]} =~ ^\*[^\.] ]]
This is another conditional expression, this time using the binary operator =~
which allows for the use of a POSIX extended regular expression for matching (more information is available on the Conditional Constructs page of the GNU Bash Reference Manual).
Each of the items in ls_cols
(via the i
iterator) is tested according to the regular expression ^\*[^\.]
which breaks down like this:
Pattern | Matches
---|---
`^` | Anchors to the start of the string
`\*` | An actual asterisk character
`[^\.]` | Any character except an actual period (`.`)
Out of the values we see in the example LS_COLORS
above, only this one matches:
*png=0;0;0;0
It looks like these "patterns" are different from "file types" in that they're not about the file extension (which would be introduced by a period). I'm still not sure exactly what this distinction means, but anyway, I'm going to keep going.
If this conditional expression is true, then what happens? Well, this line gets executed, and it's another beauty:
ls_patterns+="${ls_cols[i]/=*}|"
Let's start with ls_patterns
. This is the first time this variable name appears. No previous declarations, no nothing. Is that a good thing? I'm not sure, but I do defer to Dylan's superior skill, style and experience here. It does turn out that, according to the Advanced Bash Scripting Guide, specifically section 4.3. Bash Variables Are Untyped, "Bash variables are character strings". That is, unless they're explicitly declared to be something else such as integers or arrays. So here ls_patterns
is a string, and it starts out having no value.
That brief excursion helps us contextualise the +=
assignment operator which is covered in the Shell Parameters section of the GNU Bash Reference Manual. Unless the variable is an integer or an array, this assignment operator does what we expect it to do, i.e. appends the value on the right hand side to any existing value already in the left hand side. Seeing the |
at the end of the string on the right hand side, here:
"${ls_cols[i]/=*}|"
gives us a hint that it's going to be a pipe (|
) separated string of those patterns that were matched.
But not exactly those patterns. Notice the /=*
just after the ls_cols[i]
. This is actually a short version of this string replacement form of shell parameter expansion:
${parameter/pattern/string}
Specifically, what we're seeing is this rule in play: "If string is null, matches of pattern are deleted and the / following pattern may be omitted.".
So /=*
will cause anything starting with (and including) an equals sign to be removed from the value. Looking again at the LS_COLORS
item matched above:
*png=0;0;0;0
this would remove the =0;0;0;0
part, leaving just *png
to be appended to ls_patterns
, plus the |
as the separator, i.e. *png|
.
This is not a mutating replacement; the value of the current item remains what it was.
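A tiny sketch showing both the deletion and the fact that the original value survives:

```shell
item='*png=0;0;0;0'
echo "${item/=*}"   # *png - the first '=' and everything after it is gone
echo "$item"        # *png=0;0;0;0 - the variable itself is unchanged
```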
So that's the "collecting patterns" part of this loop. What else is there?
The other part within the loop is similar; it is also introduced by a conditional expression using the =~
operator, and the entire thing also takes the form [[ condition ]] && action
as we've seen in multiple places already:
# Prepend 'ls_' to all LS_COLORS items
# if they aren't types of files (symbolic links, block files etc.)
[[ ${ls_cols[i]} =~ ^(\*|\.) ]] && {
ls_cols[i]=${ls_cols[i]#\*}
ls_cols[i]=ls_${ls_cols[i]#.}
}
Again, each ls_cols
item is being tested, but what's the pattern this time? Well, there a clue in the comment. Digging into the regular expression ^(\*|\.)
we have this:
Pattern | Matches
---|---
`^` | Anchors to the start of the string
`( \| )` | Matching either of two values
`\*` | An actual asterisk
`\.` | An actual period
So it seems as though this regular expression would actually match some of the items that the previous one would - anything beginning with an asterisk, basically. Perhaps now would be a good time to look at a larger example of an LS_COLORS
value in the wild. I'll use the dircolors
command to produce one, as described in the Configuring LS_COLORS article I mentioned earlier (I've artificially wrapped the LS_COLORS
line to fit better into this blog post format):
; dircolors
LS_COLORS='rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01
:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st
=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.
lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=0
1;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=
01;31:*.lzo=01;31:*.xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*
.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=0
1;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*
.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.wim=01;31:*.swm=01;31:*.dwm=01
;31:*.esd=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:*.mjpeg=01;35:*.gif=01;3
5:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xp
m=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01
;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*
.webm=01;35:*.webp=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vo
b=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35
:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=0
1;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*
.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=
00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*
.opus=00;36:*.spx=00;36:*.xspf=00;36:';
export LS_COLORS
This is another pattern in the *nix world - some commands (like dircolors here) output further commands that can be executed using eval; in other words, if you run eval $(dircolors) you'll end up with your LS_COLORS variable set to the value you see, and also exported. Nice!
In the items in LS_COLORS
above, there are two different types of values before the equals signs. It's easier to see if we display the items on separate lines. There are many ways to do this, one might be to use tr
to translate each separating colon (:
) to a newline, like this:
; echo $LS_COLORS | tr ':' '\n'
rs=0
di=01;34
ln=01;36
mh=00
pi=40;33
so=01;35
...
Not bad. But there's a more Bash specific way, and that is to make use of the parameter expansion mechanism we've already seen, to perform a string replacement like this:
; echo -e ${LS_COLORS//:/\\n}
rs=0
di=01;34
ln=01;36
mh=00
pi=40;33
so=01;35
...
This is another form of:
${parameter/pattern/string}
But in this case, because the actual pattern starts with a forward slash (i.e. the second one in LS_COLORS//
), the replacement is "global" rather than singular, i.e. every occurrence of pattern
is replaced with string
. The string in this case is \\n, where the first backslash escapes the second, so that the expansion produces the two literal characters \n.
Moreover, by default, echo doesn't interpret backslash escape sequences, so the -e option is needed here to enable that interpretation (so that \n
is actually rendered as a newline).
Anyway, this larger example value for LS_COLORS
shows that not only are there items starting with an asterisk, but also other two-character items - these represent file types. Examples are ln
for symbolic links, di
for directories, so
for sockets, and so on.
Now the second regular expression ^(\*|\.)
that also matches items beginning with asterisks makes more sense, in that beyond what's matched here as well, there are other item types, and fits with the "if they aren't types of files..." comment.
But anyway, back down to business. What is to be done with LS_COLORS
items that match this second regular expression - items that are not file types? Let's take a closer look, bearing in mind what's in the comment that hints at prefixing "ls_" to these items:
ls_cols[i]=${ls_cols[i]#\*}
ls_cols[i]=ls_${ls_cols[i]#.}
This is effectively a two-pass change of the item value, by means of a parameter expansion mechanism, specifically the ${parameter#word}
variety. Working through these two passes, based on the example value of *.ogg=0;36
, this is what happens:
1. Any leading asterisk (*) is removed, resulting in .ogg=0;36
2. Any leading period (.) is removed, and the string ls_ is prepended, resulting in ls_ogg=0;36
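The two passes can be replayed in isolation like this, using the same example value:

```shell
item='*.ogg=0;36'
item=${item#\*}     # strip any leading asterisk -> .ogg=0;36
item=ls_${item#.}   # strip any leading period, prepend ls_ -> ls_ogg=0;36
echo "$item"        # ls_ogg=0;36
```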
I'm honestly not sure what the significance of this prefix is, but I guess we'll find out later.
I had a hard time remembering the difference between the meanings of the ${parameter#word} and ${parameter%word} varieties (and their double versions, i.e. ${parameter##word} and ${parameter%%word}) until I decided to think about # being the character to introduce a comment at the start of a line, and % being the percent character that one puts after (at the end of) a number.
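The mnemonic in action - # trims from the start, % trims from the end, with the doubled forms taking the longest match rather than the shortest:

```shell
file='archive.tar.gz'
echo "${file#*.}"    # tar.gz      - shortest match removed from the front
echo "${file##*.}"   # gz          - longest match removed from the front
echo "${file%.*}"    # archive.tar - shortest match removed from the end
echo "${file%%.*}"   # archive     - longest match removed from the end
```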
Once this loop is complete, we may have some value in ls_patterns
(if there were some items that started with an asterisk but without an immediately following period), and we definitely have all the item values in the ls_cols
array, some of which will have been modified.
There's now a further modification, as the comment explains, to remove characters that wouldn't be allowed in a variable name. We'll see why this is important shortly. The modification here is not within an explicit loop, but in one single go - it's quite spectacular:
# Strip non-ascii characters from the string as they're
# used as a key to color the dir items and variable
# names in bash must be '[a-zA-z0-9_]'.
ls_cols=("${ls_cols[@]//[^a-zA-Z0-9=\\;]/_}")
The comment is great, not only because it tells us what's happening, but also why it's being done. I often wish more comments in the code that I have to read reflected the "why" as well as the "what". But that's a story for another time.
If we stare at the actual executable line for a bit, it's not that scary. It's the assignment of an array to ls_cols
(which is what it is already, but the point here is that we want to modify values within it). This is the array assignment bit:
ls_cols=(...)
In the Storing Values section of the Bash Hackers Wiki this is called a "compound assignment". But what is being assigned? Well, it's this:
"${ls_cols[@]//[^a-zA-Z0-9=\\;]/_}"
And by now we should recognise that immediately, albeit in a slightly different guise. It's our old friend the ${parameter/pattern/string}
parameter expansion, but this time, applied not to a scalar variable but to an array. The Shell Parameter Expansion section for this variation has this to say: "If parameter is an array variable subscripted with '@' or '*', the substitution operation is applied to each member of the array in turn, and the expansion is the resultant list.".
That's what we could probably guess would happen, but it's nice to have the behaviour described explicitly.
So what's happening in this parameter expansion? Well, the pattern is [^a-zA-Z0-9=\\;]
, matching anything that isn't alphanumeric or an equals sign, an actual backslash or a semicolon, and replacing all occurrences (all because the second forward slash signifies "global") with underscores. And this global replacement is performed on each member of the ls_cols
array.
A short visualisation might help here. Let's say we have five items in the ls_cols
array; we can set that up like this:
; ls_cols=(ab-c D%EF gh\\i =jkl x123)
Applying this parameter expansion and printing the results, one item on each line, looks like this:
; printf "%s\n" "${ls_cols[@]//[^a-zA-Z0-9=\\;]/_}"
And the output is as follows:
ab_c
D_EF
gh\i
=jkl
x123
There's a bit more processing before we're done with this function. Again, the comments are great. Let's take a look:
# Store the patterns in a '|' separated string
# for use in a REGEX match later.
ls_patterns=${ls_patterns//\*}
ls_patterns=${ls_patterns%?}
This is another two-pass process, modifying the contents of ls_patterns
, which, if it contains anything*, has pipe-separated pattern content.
*the dircolors
-generated value for LS_COLORS
didn't have any such "patterns" at all
In the first pass here, any and all asterisks are removed; asterisks have special meaning in regular expressions, and if they were left in, they'd take on that meaning in the later match, which is very unlikely to be what's wanted. In the second pass, the final character is removed - the ? in ${ls_patterns%?} is a glob pattern matching any single character, not a literal question mark - which gets rid of the trailing | separator.
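Replaying those two passes with a hypothetical collected value (not one from the dircolors output, which had no such patterns):

```shell
ls_patterns='*png|*core|'
ls_patterns=${ls_patterns//\*}   # remove all asterisks -> png|core|
ls_patterns=${ls_patterns%?}     # drop the last character (the trailing |)
echo "$ls_patterns"              # png|core
```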
The contents of the ls_cols
array are string items from the LS_COLORS
variable, parsed and modified. Some of them -- the ones that are not file types like di
, so
and so on -- will have been prefixed with ls_
too.
The last thing this get_ls_colors
function does is to make these available as variable names. Here's the line, with the comments that go with it:
# Define the ls_ variables.
# 'declare' can't be used here as variables are scoped
# locally. 'declare -g' is not available in 'bash 3'.
# 'export' is a viable alternative.
export "${ls_cols[@]}" &> /dev/null
I was a bit confused at first as to what the comments signified, but a little digging down, via the git blame
feature, allowed me to peel back the palimpsest to reveal earlier versions of this part of the function.
The earliest occurrence of this form appeared in Feb 2019, and replaced a previous use of source
with the following:
# Declare all LS_COLORS variables.
declare -g "${ls_cols[@]}"
The actual use of source that this replaced is amazing in its own right, and I know I can learn from it (and the linked source/eval meme image), but I'll leave that for another time:

source /dev/stdin <<< "${ls_exts/#;}" >/dev/null 2>&1
So the first version of this was using declare
, with the -g
option, to declare those values as variables. If you're interested in learning more about declare, you may enjoy the post Understanding declare. It includes a quote from the help page that talks specifically about using the -g
("global") option when declare
is used within a function (as it is here) to ensure the variables are not just local to that function.
Later the same day, this was changed to almost what we have now:
# Define the ls_ variables.
export "${ls_cols[@]}"
Additionally, this was expanded to
# Define the ls_ variables.
export "${ls_cols[@]}" &>/dev/null
to deal silently with anything that might go wrong with the export, and just a few days after that, in a cleanup commit, the comments that we see now were added.
Now that we have glimpsed a little of the history, and (via Understanding declare) know that -g
must be used with declare
within a function to declare variables that have an existence beyond the function's scope, the comments make more sense.
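A small sketch of why export works here where a plain declare inside the function wouldn't (the item values are made up):

```shell
define_ls_vars() {
    local -a items=('ls_di=01;34' 'ls_ln=01;36')
    # export turns each name=value string into a variable that
    # survives beyond the function's scope.
    export "${items[@]}" &> /dev/null
}

define_ls_vars
echo "$ls_di"   # 01;34
```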
I guess we'll find out how these variables are used elsewhere in the script, but for now, this brings this post, on get_ls_colors
, to an end.
I have again learned a lot by poring over the details of this, and I'm always happy to hear from you too. Has this helped? Did I miss something, or get something wrong? Whatever it is, please feel free to let me know in the comments mechanism below. Thanks for reading this far, and thanks especially to my son Joseph for a great eye and some very helpful observations!
I managed to get this post finished while stuck on the ICE 1011 from DĆ¼sseldorf to Frankfurt which has been stationary for over two and a half hours already (and we're still stationary) due to a serious incident further down the line.
My friend and colleague Rui was asking today about finding directory information for a given global account on SAP's Business Technology Platform (BTP). Of course, being an awesome chap, he was asking in the context of the BTP CLI tool, btp
.
For a given (fictitious) trial global account, let's say I have a structure that looks like this:
With the btp
tool, I can see this information cleanly in my environment of choice, my Bash shell, with this invocation:
btp get accounts/global-account --show-hierarchy
which shows me something like this:
Showing details for global account 42bb4252-2b49-4685-bcd7-62c8d85d8b13...
├─ 7f81446xtrial (42bb4252-2b49-4685-bcd7-62c8d85d8b13 - global account)
│ ├─ trial (b3f3b2a3-d96d-4bea-8bbf-57ee84a9fc23 - subaccount)
│ ├─ mydir (e6cde265-5d78-4e7c-a8cb-8625a4daaa04 - directory)
│ │ ├─ fruit (4cc2e8f8-8cef-4828-82af-9b5adae387de - directory)
│ │ │ ├─ apple (3b4ba347-973f-4571-b4af-7862886104be - directory)
│ │ │ ├─ banana (0b58163d-5c49-4eb5-b359-17c9a0d94138 - directory)
│ │ ├─ this and that (4a050bbe-5c4d-4ac0-9a66-d7e513f8b4c8 - directory)
type: id: display name: parent id: parent type:
global account 42bb4252-2b49-4685-bcd7-62c8d85d8b13 7f81446xtrial
subaccount b3f3b2a3-d96d-4bea-8bbf-57ee84a9fc23 trial 42bb4252-2b49-4685-bcd7-62c8d85d8b13 global account
directory e6cde265-5d78-4e7c-a8cb-8625a4daaa04 mydir 42bb4252-2b49-4685-bcd7-62c8d85d8b13 global account
directory 4cc2e8f8-8cef-4828-82af-9b5adae387de fruit e6cde265-5d78-4e7c-a8cb-8625a4daaa04 directory
directory 3b4ba347-973f-4571-b4af-7862886104be apple 4cc2e8f8-8cef-4828-82af-9b5adae387de directory
directory 0b58163d-5c49-4eb5-b359-17c9a0d94138 banana 4cc2e8f8-8cef-4828-82af-9b5adae387de directory
directory 4a050bbe-5c4d-4ac0-9a66-d7e513f8b4c8 this and that e6cde265-5d78-4e7c-a8cb-8625a4daaa04 directory
The directories are hierarchically related, as you can see from the graphical depiction. But they're presented in a nice flat list towards the end, and I'm tempted to use standard *nix tools to parse that information out.
First, I'm only interested in the data in the second half, in the table that has headers like "type:", "id:" and so on. So I can remove all the lines up to (and including) that header line like this:
btp get accounts/global-account --show-hierarchy \
| sed -e '1,/^type:\s/d'
This gives me the following:
global account 42bb4252-2b49-4685-bcd7-62c8d85d8b13 7f81446xtrial
subaccount b3f3b2a3-d96d-4bea-8bbf-57ee84a9fc23 trial 42bb4252-2b49-4685-bcd7-62c8d85d8b13 global account
directory e6cde265-5d78-4e7c-a8cb-8625a4daaa04 mydir 42bb4252-2b49-4685-bcd7-62c8d85d8b13 global account
directory 4cc2e8f8-8cef-4828-82af-9b5adae387de fruit e6cde265-5d78-4e7c-a8cb-8625a4daaa04 directory
directory 3b4ba347-973f-4571-b4af-7862886104be apple 4cc2e8f8-8cef-4828-82af-9b5adae387de directory
directory 0b58163d-5c49-4eb5-b359-17c9a0d94138 banana 4cc2e8f8-8cef-4828-82af-9b5adae387de directory
directory 4a050bbe-5c4d-4ac0-9a66-d7e513f8b4c8 this and that e6cde265-5d78-4e7c-a8cb-8625a4daaa04 directory
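The 1,/^type:\s/d address range tells sed to delete everything from the first line through the first line matching the pattern, inclusive. A stand-in sketch, with simplified fake input and a plain ^type: pattern:

```shell
printf '%s\n' 'Showing details for global account...' '' 'type:  id:' 'row one' 'row two' \
    | sed '1,/^type:/d'
# prints:
# row one
# row two
```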
Now I can more reliably look for the directory
entries, right?
btp get accounts/global-account --show-hierarchy \
| sed '1,/^type:\s/d' \
| grep '^directory\s'
This gives me:
directory e6cde265-5d78-4e7c-a8cb-8625a4daaa04 mydir 42bb4252-2b49-4685-bcd7-62c8d85d8b13 global account
directory 4cc2e8f8-8cef-4828-82af-9b5adae387de fruit e6cde265-5d78-4e7c-a8cb-8625a4daaa04 directory
directory 3b4ba347-973f-4571-b4af-7862886104be apple 4cc2e8f8-8cef-4828-82af-9b5adae387de directory
directory 0b58163d-5c49-4eb5-b359-17c9a0d94138 banana 4cc2e8f8-8cef-4828-82af-9b5adae387de directory
directory 4a050bbe-5c4d-4ac0-9a66-d7e513f8b4c8 this and that e6cde265-5d78-4e7c-a8cb-8625a4daaa04 directory
Now all I need to do is grab the value of the third column ... oh, wait.
The directory named "this and that" is going to give me a bit of a headache; because it's not tabs that separate the columns, but normal spaces, I can't easily distinguish the spaces separating the columns from the spaces separating the words in the name "this and that".
While it's possible I could come up with some solution here, I think it's getting a little complex.
The BTP CLI sports a JSON output mode. This provides me with a more reliable and predictable data structure that I can parse. The natural tool to reach for here is of course jq, the "lightweight and flexible command-line JSON processor".
However, the structure of the JSON in this particular case is not regular; the nesting of parent and child objects reflects the structure of the actual hierarchy in the global account. That makes sense of course, but to be honest, the prospect of wielding some `jq` incantation to parse an object structure that I cannot know in advance felt a little scary; I had visions of recursive procedures and more.
Here's a short section of the JSON output, to show you what I mean:
As it turns out, finding the objects in this hierarchy of nested parents and children was not as scary as I thought. Here's the invocation again, this time passing the option `--format json` when invoking `btp`, and parsing the output with `jq`:
btp --format json get accounts/global-account --show-hierarchy \
| jq -r 'recurse | objects | select(.directoryType=="FOLDER") | .displayName'
This produces the following output to STDOUT:
fruit
banana
apple
this and that
Wonderful!
Here's a quick summary of what each of the items in the `jq` pipeline does:

- `recurse`: Recursively descends `.`, producing every value
- `objects`: Selects only inputs that are objects
- `select(boolean_expression)`: Produces its input unchanged if the expression returns true

The boolean expression used with `select` ensures that only objects that have a JSON property `directoryType` with the specific value `FOLDER` are picked out. From those, just the value of the `displayName` property is then produced.
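Assuming `jq` is installed, the same pipeline can be tried against a tiny hand-made hierarchy (sample data, not real btp output), which shows the recursion finding `FOLDER` objects at different depths:

```shell
# A miniature parent/child structure, loosely shaped like the btp output:
json='{"directoryType":"ROOT","displayName":"ga","children":[{"directoryType":"FOLDER","displayName":"fruit","children":[{"directoryType":"FOLDER","displayName":"apple"}]}]}'

# recurse visits every value; objects keeps only the JSON objects;
# select picks the FOLDER ones, from which displayName is extracted.
echo "$json" | jq -r 'recurse | objects | select(.directoryType=="FOLDER") | .displayName'
```
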
And that's it. Not as scary as I thought.
I need to improve my `jq` fu, and using it like this to process output from CLI tools such as `btp` is one way of doing it.
Today I wrote a script `checksubmissions` to check submitted pull requests in the SAP-samples/devtoberfest-2021 repo related to Devtoberfest from SAP this year, specifically the Best Practices week.
In its current form, at the time of writing, the script follows a pattern that I've used for a while now:

- function definitions
- a `main` function definition
- an invocation of the `main` function, passing on to it any parameters that were supplied when the script is invoked

For the sake of illustration, here's a super simplified script called `myscript` that follows that pattern:
#!/usr/bin/env bash
set -o errexit
func1() {
  echo "Inside func1"
}
main() {
  echo "Running main with $*"
  func1
}
main "$@"
Sometimes, especially when building out scripts like this, I like to test the functions individually, from the command line if possible. In the example from today, I check every pull request each time, initiated from `main` like this:
main() {
  getprs | while read -r number title; do
    check "$number" "$title"
    sleep 1
  done
}
main "$@"
(Source)
But while developing, I wanted to test out the `check` function (source) manually on a single pull request. Of course, editing the script to do that wasn't much of an effort, but I wondered if there was another way.

What if, from the shell prompt, I could source the script, to bring the function definitions into my current environment, and then manually invoke the `check` function on a single pull request?

Sourcing the script as it is would have the unwanted effect of running checks on all the pull requests, because the last line in the script actually invokes `main`, as it's supposed to.
It turns out that it is possible to determine whether a script is being sourced just by close examination of the `$0` variable.

There's also the `BASH_SOURCE` shell variable, which I want to look into as well (e.g. by reading this StackOverflow post: Choosing between $0 and BASH_SOURCE), but that's for another time.
First, I can replace the simple invocation:
main "$@"
with this:
if [[ $0 =~ ^-bash ]]; then
  return 0
else
  main "$@"
fi
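For comparison, here's a minimal sketch of a `BASH_SOURCE`-based variant (an assumption on my part, not what the original script does): when a script is executed, `${BASH_SOURCE[0]}` and `$0` both name the script, but when it's sourced, `$0` names the invoking shell instead, so the two differ.

```shell
#!/usr/bin/env bash
# Hypothetical guard using BASH_SOURCE rather than matching against "-bash".
main() {
  result="main ran"
}

# Equal when executed directly; different when the file is sourced.
if [[ "${BASH_SOURCE[0]}" == "$0" ]]; then
  main "$@"
fi
```

One nice property of this form is that it doesn't depend on the name of the interactive shell being `bash`.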
When the script is executed, the value of `$0` is the name of the script. But when it's sourced, it's `-bash`.

Now, if I implement this change in the super simplified `myscript` above, then this is what happens at a Bash shell prompt.

First, to show there's nothing up my sleeve, an attempt to invoke `func1` fails:
; func1
-bash: func1: command not found
Now I execute the script, and it behaves as expected:
; ./myscript hello world
Running main with hello world
Inside func1
Of course, we still don't have the `func1` function available to us:
; func1
-bash: func1: command not found
But what if I source the script rather than execute it? I can do that with `source myscript` or simply `. myscript`:
; source myscript
Nothing seems to happen. Which is good -- we don't get the "running main with hello world" or "Inside func1" output.
But now the definition of `func1` is available, and we can run it:
; func1
Inside func1
That seems rather appealing!
These are early days and I may have missed a fundamental gotcha, but for now I've found a way (which vaguely reminds me of Python's `if __name__ == "__main__"` pattern) to reduce that (already small) gap even further between the interactive shell and script content.
Bash functions seem to sit in a sweet spot between aliases and full blown scripts. I've defined a number of functions in my dotfiles which are all useful. Unlike aliases, they can take parameters and have greater scope for doing things; unlike scripts, they run in the context of the current shell which means, for example, that I can set a value in a variable during the course of a function's execution and it's available directly afterwards, in the same shell session.
Anyway, in the context of thinking about functions more, I decided to write a "wrapper" around one of the CLI tools I'm using a lot at the moment, the btp CLI, i.e. the command line tool for administration of resources and services on the SAP Business Technology Platform (BTP). If you want to learn more about the btp CLI and, indirectly, about BTP, have a look at the corresponding SAP Tech Bytes series of blog posts, starting with SAP Tech Bytes: btp CLI - installation, and the associated branch in the SAP Tech Bytes repo on GitHub.
The btp CLI, like other tools that manage cloud-based resources, is quite verbose in its output; partly because it needs to impart a lot of information, and partly because cloud resources, being "cattle, not pets", have identities that are more likely to be long GUIDs than short human-friendly names. For more on this, see this post in my Monday morning thoughts series: A cloud-native smell.
So when invoking the btp CLI with an action to look at the hierarchy of directories and subaccounts in a global account, the output tends to wrap around, like this:
But with a simple wrapper function, I can have this a lot cleaner; granted, if there's something at the end of the long lines that I'm interested in, then I won't be able to use this, but it's usually information at the start of the lines that's important to me.
Here's a wrapper function for the btp CLI that I've just started to use:
btp() {
  if [[ $1 =~ ^(get|list)$ ]]; then
    "$HOME/bin/btp" "$@" | trunc
  else
    "$HOME/bin/btp" "$@"
  fi
}
When I use `btp` to display information, with the `get` or `list` actions, this will run the real btp CLI, passing it whatever arguments I passed to the function (i.e. in `$@`), piping the output to the `cut` command, where I use the `-c` option to tell it to output only characters from column 1 up to however many columns the current terminal has (which can be determined with `tput cols`).
For those of you who are #HandsOnSAPDev pioneers, you may recognise this, as we encapsulated `cut -c 1-$(tput cols)` as `trunc`.
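For completeness, here's a sketch of what such a `trunc` function might look like; the `COLUMNS` lookup and the hard-coded fallback of 80 are my own additions for non-interactive use, not part of the original:

```shell
# trunc: keep only as many characters per line as the terminal is wide.
# COLUMNS (if set) wins; otherwise ask tput; fall back to 80 in pipelines.
trunc() {
  local width="${COLUMNS:-$(tput cols 2>/dev/null || echo 80)}"
  cut -c "1-${width}"
}

# Pretend the terminal is 10 columns wide:
echo "abcdefghijklmnop" | COLUMNS=10 trunc
```
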
Now, running that same command, the output looks like this:
It's early days, but I quite like the way I can use the power of functions like this.
At the end of the working day I'm tired, but there's often just enough energy left in my brain to explore new options for some Unix commands, and practise my shell fu. Here are a few trivial things that I just learned, by writing a pipeline to choose and display a new theme for my current terminal of choice, kitty.
There's a nice selection of themes for kitty; I've installed the contents of the repo in the right place and can select a theme by adding an include at the bottom of my `kitty.conf` file:
include ./theme.conf
and then creating `theme.conf` as a symbolic link pointing to the actual theme configuration file (from the repo) that I want to use:
lrwxr-xr-x 1 i347491 staff 72 13 Sep 18:07 theme.conf -> kitty-themes/themes/SpaceGray.conf
This was achieved with the following:
cd $DOTFILES/config/kitty/ \
&& find kitty-themes/themes -name '*.conf' \
| shuf -n 1 \
| xargs -J % ln -fsv % theme.conf \
| grep -P -o '\w+(?=\.conf$)'
This uses `find` to look for `conf` files in the `kitty-themes/themes/` directory within my dotfiles configuration for kitty. The output of such a `find` command looks like this:
kitty-themes/themes/SpaceGray_Eighties_Dull.conf
kitty-themes/themes/Monokai.conf
kitty-themes/themes/Floraverse.conf
...
I pass this list to `shuf` (short for "shuffle"), which "generates random permutations", and ask it via `-n 1` to give me just one back.
Then of course it would be nice to use `xargs` to pass that single, random theme file name, for example `kitty-themes/themes/SpaceGray.conf`, to the `ln` command to create a symbolic link. The thing is, `xargs` puts what it's given at the end of the argument list; in other words, if we did this:
echo kitty-themes/themes/SpaceGray.conf | xargs ln -fsv theme.conf
then the `ln` command invoked would be the wrong way round, i.e.:
ln -fsv theme.conf kitty-themes/themes/SpaceGray.conf
instead of
ln -fsv kitty-themes/themes/SpaceGray.conf theme.conf
Luckily `xargs` has the `-J` option (a BSD `xargs` feature; GNU `xargs` offers the similar `-I`), which allows us to specify a placeholder and then refer to it to insert the value in the right position, which is what is happening here: the `%` is the placeholder and shows where in the `ln` command the value should be put:
xargs -J % ln -fsv % theme.conf
What of the `ln` command itself? Well, there's the `-s` option, which is the main deal, i.e. we want to create a symbolic link. The `-f` option tells `ln` not to worry about any existing file (i.e. if there's already a `theme.conf`) and to just overwrite it. And `-v` is the verbose option, which outputs what is being done.

This last `-v` option is used so that I can get the name of the randomly selected theme. Without the last part of the pipeline, into `grep`, we'd see something like this, output because of the `-v` option to `ln`:
theme.conf -> kitty-themes/themes/SpaceGray.conf
So we can then pipe this into `grep` to grab the `SpaceGray` part, invoking the powerful Perl Compatible Regular Expression (PCRE) flavour of regular expressions (for which I have to use the `-P` option) to be able to use a positive lookahead assertion `(?=\.conf$)`, saying that what we're trying to match, `\w+` (a sequence of at least one word character), must be directly followed by `.conf` up against the end of the line (`$`).

Such an assertion is not part of the actual match, which means we can then simply use the `-o` option to tell `grep` to output just the match itself, i.e.:
SpaceGray
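As an aside, a pure-Bash alternative to the `grep -P` step would be parameter expansion; this sketch uses an example path:

```shell
# Strip the directory prefix and the .conf suffix with parameter expansion:
theme="kitty-themes/themes/SpaceGray.conf"
name="${theme##*/}"    # remove everything up to and including the last slash
name="${name%.conf}"   # remove the trailing .conf
echo "$name"
```
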
I use this technique in getbtpcli (see line 61) - if you're interested in reading more about this, have a look at the blog post SAP Tech Bytes: btp CLI - installation, and the comments too.
And that's pretty much it. Nothing earth shattering, but certainly a couple of things that I found out (in particular `ln`'s `-v` option and `xargs`'s `-J` option).
Happy learning!
`fff` is "a simple file manager written in Bash". As I'm always on the lookout to learn more about Bash, that description got my attention immediately. It's a small but perfectly formed offering, complete with man page and even a `Makefile` for installation. And the file manager executable* itself is a single Bash script.
*I use this term deliberately, and it does make me stop and think every time I see scripts in a `bin` directory (where "bin" stands for binary). But that's a conversation for another time.
The author, Dylan Araps, has produced other interesting pieces of software (such as neofetch) as well as some great documents such as the pure bash bible and the pure sh bible. He's also the creator of Kiss Linux. He has a reputation for writing great Bash code, so this seems like an opportunity too good to miss to learn from better programmers.

It seems that Dylan has recently disappeared off the radar; I don't know what the situation is, but I wish him well.
Anyway, I wanted to take a first look at `fff` to see what I could discern. I'm reviewing the code as it stands at the latest to-date commit, i.e. here.
Where I can, I link to reference material so you can dig in further to Bash details that take your fancy. This reference material includes the following sites (and there are more of course):
- the Unix & Linux Stack Exchange (questions with the `bash` tag)

As I mentioned recently in Learning by rewriting - bash, jq and fzf details, I like to structure Bash scripts into functions, with a `main` function towards the end, followed by a simple call to that function, passing in everything that was specified on the command line via the `$@` special parameter, which "expands to the positional parameters, starting from one" (positional parameter zero is the name of the script itself).
This is a practice I picked up, I think, from Google's Shell Style Guide - see this section for details. I wrote about this guide last year in Improving my shell scripting.
Dylan structures `fff` in the same way, and uses the `main` pattern too. For me, that's a good affirmation of this approach.

The `main` function itself begins with a series of comments, indented to the same level as the rest of the function body:
main() {
    # Handle a directory as the first argument.
    # 'cd' is a cheap way of finding the full path to a directory.
    # It updates the '$PWD' variable on successful execution.
    # It handles relative paths as well as '../../../'.
I used to oscillate between putting comments that describe a function before the function definition, and within it. On balance I prefer the comments to be within, so the entire function content is encapsulated within the `{...}` brace-bound block.

The comment here is interesting too; it shows that a knowledge of side effects (the setting of a value in `$PWD`) can be useful, and also a willingness to use `cd` itself; to quote Ward Cunningham, "the simplest thing that could possibly work" (this came up in a great interview with Ward, which I've transcoded to audio format in my "Tech Aloud" podcast - see The Simplest Thing that Could Possibly Work, A Conversation with Ward Cunningham).
The first actual executable line is now ready for our gaze, and it's a beauty.
cd "${2:-$1}" &>/dev/null ||:
What can we unpack from that?
Let's start with the parameter expansion used here: `"${2:-$1}"`. The `${parameter:-word}` form lets you specify a default value, basically; if the value of 'parameter' is unset or null, then the expansion of 'word' is substituted.

First of all, the idea is that if a value is specified when `fff` is invoked, it's used as the directory to start in. Now that's established, let's dig in a little more.

The first question that comes to mind is: why is `$2` (the second parameter) specified first, falling back to `$1`? Well, my take is that it's again a simple but effective way of handling optional parameters when the script is invoked.
There has been an awful lot written about how best (and how not) to parse script parameters in Bash, from roll-your-own solutions and the use of the `case` statement, to the `getopts` builtin. Each approach has its merits and downsides, and there doesn't seem to be a single, universal ideal.
If we read a little further ahead in the `main` function, we notice checks for various options in `$1`:
| Check for | Check with |
|---|---|
| Version information | `[[ $1 == -v ]] && { ... }` |
| Help | `[[ $1 == -h ]] && { ... }` |
| Some custom file picker processing | `[[ $1 == -p ]] && { ... }` |
So we know from this that there are at least three option parameters that `fff` understands, and that they are expected before anything else (the starting directory, if any) is passed on invocation (I guess we can assume that Dylan doesn't expect more than one of them to be specified in any single invocation, too).

Knowing this, the `"${2:-$1}"` incantation is easier to understand: it tries the value of the second parameter as the directory to start in, assuming that one of the option parameters might have been specified first. But if an option parameter wasn't specified, then any starting directory would be not in `$2` but in `$1`, which the parameter expansion deals with perfectly here.

I think this potentially saves some unnecessary conditional logic that would otherwise make this section of `main` more verbose. I like it!
What if no starting directory was specified at all? What if a value was specified, but that value wasn't a directory, and wasn't something that was going to make sense being passed to `cd`?

Giving an inappropriate value to `cd` results in an error, for example:
$ cd foo # foo doesn't exist
-bash: cd: foo: No such file or directory
or even just:
$ cd testfile # this is a file not a directory
-bash: cd: testfile: Not a directory
The behaviour actually appropriate in these cases is just to allow the `cd` invocation to fail, and for `fff` to start in whatever directory we happen to be in. We don't want to see any error messages, so they're redirected to `/dev/null`. The redirection construct used is quite interesting in itself, though.

What we see here is `&>`, and according to 3.6.4 Redirecting Standard Output and Standard Error in the Bash manual, it's the preferred short form for redirecting both standard output (STDOUT) and standard error (STDERR) to the same place. Indeed, we see that

&>/dev/null

is equivalent to

>/dev/null 2>&1

Moreover, you may be happy to find out that this in turn is a short form of

1>/dev/null 2>&1
because the standard three file descriptors that are opened are:
| Descriptor | Representation |
|---|---|
| 0 | standard input (STDIN) |
| 1 | standard output (STDOUT) |
| 2 | standard error (STDERR) |
This is just my guess, but because redirecting standard output to a file is very common, the simple short form `>` (for `1>`) is very useful, and more logical to allow than a short form for redirecting standard error (`2>`).
Note that when using redirection, the order of redirection is important:
>/dev/null 2>&1
is not the same as
2>&1 >/dev/null
So we have to be careful. One could therefore argue that the use of the short form `&>` here in the `main` function is helpful, because there's only one part to the construct, so you can't get it "the wrong way round".
I recommend the wonderfully illustrated Redirection Tutorial in the Bash Hackers Wiki for lots more goodness on this subject.
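Here's a small demonstration of why the order matters; `noisy` is just an illustrative helper:

```shell
# A function that writes one line to each stream:
noisy() { echo "to stdout"; echo "to stderr" >&2; }

# stdout to /dev/null first, then stderr follows it there: nothing captured.
a=$(noisy >/dev/null 2>&1)

# stderr is duplicated onto the *old* stdout (the capture) first,
# and only then is stdout pointed at /dev/null: stderr is captured.
b=$(noisy 2>&1 >/dev/null)

echo "a='$a' b='$b'"
```
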
The comment above the `cd` invocation sort of explains the last bit:
# '||:': Do nothing if 'cd' fails. We don't care.
cd "${2:-$1}" &>/dev/null ||:
It's not unusual to see the logical operator for OR, i.e. `||`. What's interesting is that this operator is explained in the Bash manual in the context of lists, as a separator within such lists.

So this invocation in the `main` function is called an "OR list", i.e. `command1 || command2`, where `command2` is executed if and only if `command1` fails. What does "fail" mean? Well, return a non-zero exit status, basically.

So if the `cd` command fails, what gets executed as `command2`? Well, that's the even more interesting part. It's `:`.

Yes, the colon is a shell builtin inherited from the Bourne shell (sh) and is the "no operation" command (a bit like, say, `pass` in Python). In some ways it has a similar effect to `true` (i.e. it does nothing, successfully), but it's also different, in that it will expand arguments and perform redirections. For example, you can specify stuff after the colon to manipulate files if needed.
Read more on this no operation or "null command" in What is the Bash null command?, and also take a look at this example from `pash`, a password manager written in POSIX `sh` (from the same author), where `:` appears with "side effects", using the `:=` parameter expansion to assign default values to a couple of variables:
: "${PASH_DIR:=${XDG_DATA_HOME:=$HOME/.local/share}/pash}"
We'll examine the use of `:=` later in this post when we come across it.
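In the meantime, here's a minimal sketch of that pattern, with an illustrative variable name rather than the real `PASH_DIR`:

```shell
# ':' does nothing, successfully, but its argument is still expanded,
# and the := expansion assigns a default value as a side effect:
unset DEMO_DIR
: "${DEMO_DIR:=$HOME/.local/share/demo}"
echo "$DEMO_DIR"
```
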
So now that we've looked through the interesting parts of this line, we can translate it to: "try to change directory to what was given in the second parameter when invoked, and failing that, the first parameter; don't show any errors or anything at all on the terminal, and if that fails generally, don't do anything". Simple and minimal. A great start!
Following this first line we have those tests we saw briefly earlier, the ones that check for and act upon specific option parameters. Interestingly the availability of these option parameters is not documented, at least as far as I can see - either in the man page or in the GitHub repo in general.
Anyway, I like the way these action-on-condition lines are written, they're short, concise and are also reminiscent of the sorts of expressions one sees in Perl scripts too (or is it the other way around - after all, Perl was created as an amalgam (and more) of various shell scripting substrates).
Looking at the first instance, we see this:
[[ $1 == -v ]] && {
    printf '%s\n' "fff 2.2"
    exit
}
Beyond the actual concise way this has been written, avoiding the wordy "if ... then ... fi" construct, there are a couple of things that are worth looking at.
Following the `if` of the standard construct we have a command list, the exit code of which is checked to determine how to proceed. How this command list is expressed has changed over the years, as we've moved from `sh` to `bash` and have had POSIX to think about too.

More traditionally, the condition `$1 == -v` might have been introduced with `test`, or expressed within single square brackets, i.e. `[ $1 == -v ]`. The opening single square bracket is interesting in its own right, being a synonym for `test`. In fact, while `[` is built in to many shells (including `bash`), it's also an external command, as is `test`. In case you want to find out more, you may find this post interesting: The open square bracket [ is an executable.

These days one often sees the more modern version with double square brackets, as we see here. This construct is also built into `bash` and allows for a richer set of expressions within. For example, the operator `=~`, which allows the use of a regular expression for matching, is not available within the `[ ... ]` construct but is available within `[[ ... ]]`. Moreover, there are different quoting rules; for example, in some cases you can omit double quotes within `[[ ... ]]`-enclosed conditions.
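A quick illustration of one `[[ ... ]]`-only feature, the `=~` operator (the version string here is just an example value):

```shell
# =~ is only available inside [[ ... ]]; a successful match populates
# the BASH_REMATCH array with the whole match and any capture groups.
value="fff 2.2"
if [[ $value =~ ^fff[[:space:]]+([0-9.]+)$ ]]; then
  echo "version is ${BASH_REMATCH[1]}"
fi
```
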
Here are a couple of helpful answers with more information, on the Unix and Linux Stack Exchange:
Why is `printf` used here, and not the arguably simpler `echo`? The main differences between the two are:

- `echo` adds a newline character, `printf` does not
- `printf` allows for and centres around a format string

There's some amazing background information on echo(1) and printf(1), but for me the bottom line is that `printf` gives you more control over the output. Perhaps for those versed in programming languages where there's a similar format-string-focused `printf` function, using it feels more natural.

Throughout the entire `fff` script there's no use of `echo`, only `printf`; my guess is simply that `printf` is used for consistency throughout. I'm also guessing that the separation of the format string from any variable values allows for consistency in expression - none of the uses of `printf` in `fff` have the format string in anything other than single quotes, meaning there's less to worry about in terms of variable expansions.
Before we leave this section, I think it's worth pointing out something minor but nonetheless interesting. I often have a `usage` function that emits instructions to standard output, and that would be called in the situation where help was requested. I do like the way that the Unix philosophy is used even here; there's man page content, as we saw earlier, so why not use that instead? This also emphasises the extremely short distance between script and interactive command line with shell languages:
[[ $1 == -h ]] && {
    man fff
    exit
}
Finally, let's take a quick look at the third option parameter here, `-p`:
# Store file name in a file on open instead of using 'FFF_OPENER'.
# Used in 'fff.vim'.
[[ $1 == -p ]] && {
    file_picker=1
}
It looks like this is the option that caused the introduction of the `"${2:-$1}"` parameter expansion we examined earlier, introduced with this commit: general: Added -p to store opened files in a file for use in fff.vim. In addition to the comment here, the title of the commit sort of gives it away ... `-p` is for use from within the Vim plugin fff.vim, which allows `fff` to be used as a file picker within the editor.

One last thing that catches my eye here: this is the first time we see a variable assignment. The odd thing (to me) is that nowhere in the script is the `file_picker` variable declared.

There is some usage of `declare` elsewhere in the script, so we'll leave that examination until then, except to notice that this undeclaredness is not something that `shellcheck` complains about. If you ask `shellcheck` to check the source of `fff`, and get it to explicitly exclude specific errors as the CI configuration does (none of them related to variable declaration):

shellcheck fff -e 2254 -e 2244 -e 1090 -e 1091

then `shellcheck` ends calmly and quietly with no errors. Maybe my fervent desire to use `declare` and `local` liberally throughout my scripts is misguided?
The section that follows the processing of options is about handling certain contexts.

The first of these applies when we're running a relatively modern version of Bash:
((BASH_VERSINFO[0] > 3)) &&
read_flags=(-t 0.05)
There's so much to unpack from this; let's start with the `BASH_VERSINFO` shell variable. What Bash variables are available, generally? Well, there are quite a few - getting completion to work for us with the Tab key, we see this:
$ echo $BASH<tab>
$BASH $BASH_ARGC $BASH_COMMAND $BASH_SOURCE
$BASHOPTS $BASH_ARGV $BASH_COMPLETION_VERSINFO $BASH_SUBSHELL
$BASHPID $BASH_ARGV0 $BASH_LINENO $BASH_VERSINFO
$BASH_ALIASES $BASH_CMDS $BASH_REMATCH $BASH_VERSION
There's `BASH_VERSION`, which is a string like this:

5.1.4(1)-release

But there's also `BASH_VERSINFO`, which is an array containing the various pieces of that version string, plus a bit more too:
$ for val in "${BASH_VERSINFO[@]}"; do echo "$val"; done
5
1
4
1
release
x86_64-pc-linux-gnu
I hadn't known of the existence of `BASH_VERSINFO` until now. Using an element of this array is a better approach than parsing the value out of the `BASH_VERSION` string.

Something else to unpack is the construct within which we find the reference to `BASH_VERSINFO`: the `(( ... ))` construct, an arithmetic evaluation containing an arithmetic expression. I tend to think of these expressions as being in one of two categories:

- assignments, such as `(( answer = 40 + 2 ))`
- conditions, such as `(( answer < 50 ))`

There's a related construct called an arithmetic expansion, which follows the usual Bash meaning of "expansion", whereby the evaluation of the arithmetic expression it contains is substituted as the result; the construct looks like this: `$(( expression ))`.
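Both constructs are easy to try out at a prompt (the variable name `flags_supported` here is illustrative):

```shell
# Arithmetic evaluation used as a condition:
(( BASH_VERSINFO[0] > 3 )) && flags_supported=1
echo "major version: ${BASH_VERSINFO[0]}, modern: ${flags_supported:-0}"

# Arithmetic expansion used as a value:
echo "answer: $(( 40 + 2 ))"
```
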
Anyway, here we have an arithmetic evaluation acting as a condition in a short form of the if
construct. And what is executed if the condition is true, i.e. if the version of Bash is indeed greater than 3? Now that has had me scratching my head for a while. Not about what it is, but why Dylan used it.
This is what I'm talking about:
read_flags=(-t 0.05)
The `read_flags` variable is used later in this `main` function, in a call to `read`, like this:

read "${read_flags[@]}" -srn 1 && key "$REPLY"

I thought it was quite unusual, or at least very deliberate, to have used an array (`(-t 0.05)`) instead of just a string (`"-t 0.05"`) here. Dylan used this directly in a single commit introducing the read_flags feature, as if it was obvious that this use of an array was the right thing to do from the outset. From a pragmatic point of view, it was clearly the right thing to do, as using a string like this:
read_flags="-t 0.05"
read "$read_flags" -srn 1

would have resulted in `read` complaining about the timeout (`-t`) value, like this:
read: 0.05: invalid timeout specification
I had struggled a little with this, knowing it was related to the whitespace before the 0.05 timeout value, but couldn't quite figure it out myself. I asked on the Unix & Linux Stack Exchange and got some wonderful answers and insights, thank you folks. I'd encourage you to read the question and the answers supplied for enlightenment, if you're interested.
A side effect of the enlightenment that came my way from this was the fact that in preparing the error message above, I realised that a simple string could have been used here, as long as it was not quoted in the invocation:
read_flags="-t 0.05"
read $read_flags -srn 1
This works fine and `read` doesn't complain, because the shell is word splitting on whitespace, and thus the rogue space between `-t` and `0.05` which was being passed to `read` is now consumed in the word splitting action. I'm so used to quoting variables because, since introducing `shellcheck` into my scripting flow, I'm constantly reminded to do so by SC2086. I guess there are (rare) cases where you don't want to avoid word splitting on the value of a variable.

The next two checks are related to `fff` options, based on the values of the environment variables `FFF_LS_COLORS` and `FFF_HIDDEN`. Both exhibit nice examples of a particular type of parameter expansion, one that we briefly noticed earlier in this post in the `pash` script.
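The difference between the array and the quoted string is easy to see with `printf`, which prints one `<...>` per argument it receives:

```shell
# An array keeps "-t" and "0.05" as two separate words:
flags=(-t 0.05)
printf '<%s>' "${flags[@]}"; echo    # <-t><0.05>

# A quoted string expands to a single word containing the space:
flags_str="-t 0.05"
printf '<%s>' "$flags_str"; echo     # <-t 0.05>
```
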
This is what those checks look like:
((${FFF_LS_COLORS:=1} == 1)) &&
get_ls_colors
((${FFF_HIDDEN:=0} == 1)) &&
shopt -s dotglob
The `:=` form of shell parameter expansion lets us assign a parameter a value if it doesn't have one. To quote the documentation for this `${parameter:=word}` form:
If parameter is unset or null, the expansion of word is assigned to parameter. The value of parameter is then substituted. Positional parameters and special parameters may not be assigned to in this way.
So this is basically a default assignment, before the actual comparison with `==`. Taking the first example, we can think of it as: "If the `FFF_LS_COLORS` variable is unset or null, assign it the value `1`. Now, is the value of `FFF_LS_COLORS` equal to `1`?"

The second example is similar, except that the default value to assign to `FFF_HIDDEN`, before the actual comparison, is `0` not `1`.

This is a very succinct way of assigning default values, within an expression. In some ways the shape and action of `:=` reminds me of Perl's `||=`. Or is it the other way round?
While the `get_ls_colors` function is elsewhere in the `fff` script, and we'll get to it another time, it's worth taking a quick look at what's executed if `FFF_HIDDEN` is `1`. The relevant section of the man page explains what this variable controls: whether hidden files are shown in the file manager or not. In fact, the explanation in the man page reflects the `:=0` part of the parameter expansion (i.e. the default value is `0`, as shown in the man page):
# Show/Hide hidden files on open.
# (On by default)
export FFF_HIDDEN=0
How is this showing or hiding of hidden files controlled? Through the use of the `shopt` (shell option) builtin. While asking for help with `shopt --help` will give you basic information (such as how to set, with `-s`, or unset, with `-u`, the options), it doesn't enumerate what the options are. For that I had to look at the shopt builtin section of the Bash reference manual. The `dotglob` option is described thus:
If set, Bash includes filenames beginning with a '.' in the results of filename expansion. The filenames '.' and '..' must always be matched explicitly, even if dotglob is set.
Pretty self explanatory and not unexpected; still, it was nice to be able to see a shell option in action.
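A quick way to see dotglob at work is in a throwaway directory; this sketch assumes nothing beyond Bash and mktemp:

```shell
#!/usr/bin/env bash
# Compare filename expansion with and without dotglob.
dir=$(mktemp -d)
touch "$dir/visible" "$dir/.hidden"
cd "$dir" || exit 1

shopt -u dotglob
echo "without dotglob:" *    # only 'visible' is matched

shopt -s dotglob
echo "with dotglob:" *       # '.hidden' now appears too ('.' and '..' still don't)

cd / && rm -r "$dir"
```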
In fact, there are other shell options in use a little bit further down in this main
function, and they're explained in comments, too:
# 'nocaseglob': Glob case insensitively (Used for case insensitive search).
# 'nullglob': Don't expand non-matching globs to themselves.
shopt -s nocaseglob nullglob
These are sensible options for a file manager, at least, they make sense to me. Incidentally, there are more glob-related shell options: extglob
, failglob
, globasciiranges
and globstar
.
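Of those two options that fff sets, nullglob is the easier to demonstrate in isolation: without it, a glob that matches nothing is left as-is, and with it, the glob expands to nothing at all. A small sketch:

```shell
#!/usr/bin/env bash
# In an empty directory, *.xyz matches no files.
dir=$(mktemp -d) && cd "$dir" || exit 1

shopt -u nullglob
for f in *.xyz; do echo "default: $f"; done   # prints the literal 'default: *.xyz'

shopt -s nullglob
for f in *.xyz; do echo "nullglob: $f"; done  # glob expands to nothing; no output

cd / && rm -r "$dir"
```

For a file manager that iterates over directory entries with globs, having non-matching patterns silently disappear is exactly what you want.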
At this point the options have been dealt with (and the trash and cache directories have been created); it's now time to set a few hooks to handle various signals, and then call various functions.
This is done with the trap
builtin, and there are two instances of this:
# Trap the exit signal (we need to reset the terminal to a useable state.)
trap 'reset_terminal' EXIT
# Trap the window resize signal (handle window resize events).
trap 'get_term_size; redraw' WINCH
Looking at the Bash Beginners Guide section on traps we can see that the trap
pattern is:
trap [COMMANDS] [SIGNALS]
Looking at the first instance, while it's common for specific and individual SIGNALS to begin "SIG", there's also a general "EXIT" value that can be used; this triggers both when the shell script terminates of its own accord and when it is terminated by the user with CTRL-C.
Running this simple script terminator
and allowing it to exit, and then running it and CTRL-C'ing it after a second, demonstrates this:
trap 'echo EXITING...' EXIT
echo Press CTRL-C or wait 5 seconds to exit
sleep 5
Here's what happens:
$ bash terminator
Press CTRL-C or wait 5 seconds to exit
EXITING...
$ bash terminator
Press CTRL-C or wait 5 seconds to exit
^CEXITING...
It took me a bit longer than I thought to find definitive documentation on the signal in the second instance - SIGWINCH
(or WINCH
). Rather than being Bash specific, this is of course related to the interaction between processes and terminals in a Unix context. The Signal (IPC) Wikipedia page has a wealth of information, including a reference to SIGWINCH
in the list of POSIX signals. To quote:
The SIGWINCH signal is sent to a process when its controlling terminal changes its size (a window change).
The footnote reference associated with this leads to a recent (2017) proposal which suggests why this signal is less widespread in coverage and use. While I'm aware of some of the more common signals (such as SIGCHLD
, SIGINT
, SIGKILL
and so on) I'd never heard of SIGWINCH
until now.
The two functions that are called when this signal is trapped, get_term_size
and redraw
, make sense in the context of what this "window change" signal represents.
The final part of the main
script, after calling some functions to set things up, is what Dylan refers to as a "Vintage infinite loop". It's quite the eyecatcher:
# Vintage infinite loop.
for ((;;)); {
read "${read_flags[@]}" -srn 1 && key "$REPLY"
# Exit if there is no longer a terminal attached.
[[ -t 1 ]] || exit 1
}
Why not simply while true; do ...; done
? I can only surmise this is something playful, an enjoyment of the relationship that Bash has (or mostly doesn't have) with the C programming language, where this so-called three-expression for loop is widely used, with an initialiser, a loop continuation condition and a modifier that are separated by ;
characters. Common in C and related languages, but not so much in Bash, I would have thought.
What has got me thinking, however, is why and how does ((;;))
even work?
It's definitely a lesser used construct for loops in Bash; again, I had to search a little deeper to find official references to it. In Advanced Bash-Scripting Guide: Chapter 11. Loops and Branches, we see "Example 11-13. A C-style for loop":
LIMIT=10
for ((a=1; a <= LIMIT ; a++)) # Double parentheses, and naked "LIMIT"
do
echo -n "$a "
done # A construct borrowed from ksh93.
I'm not sure if the reference to the Korn shell (ksh93)* in the comment relates to the entire construct, or particularly to the three semicolon-separated expressions within the arithmetic evaluation (( ... ))
. In any case, while it's clear what this is and how it works, it remains to me somewhat of a mystery as to why the particular instance used in the main
function here works, where all three expressions are null ((;;))
.
*Since listening to a very enjoyable Committing to Cloud Native podcast episode 22 recently: Thoughts on Bash Becoming Interplanetary and More with Brian J. Fox I've become more aware of the relationship between Brian Fox (Bash's creator) and David Korn (ksh's creator), and the features and style of their respective shells, that they were striving to finalise in 1989 as replacements for Stephen Bourne's sh
.
I am guessing first that the for
knows to look for truthiness in the second expression (the one in between the two semicolons). That's a bit vague, I know. I'm also guessing that an empty value here is going to be "truthy", in that, according to the Bash Hackers Wiki section on truth, anything that's not 0 is true. That seems more likely, but I'd love to find out more about this.
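As it happens, the Bash reference manual's description of the arithmetic for command backs up that second guess: if any of the three expressions is omitted, it behaves as if it evaluates to 1. That makes ((;;)) loop forever unless something inside breaks out, which we can verify with a counter:

```shell
#!/usr/bin/env bash
# An omitted condition in for (( ; ; )) behaves as if it were 1 (true),
# so the loop only ends via an explicit break.
count=0
for ((;;)); do
  ((++count))
  if ((count == 3)); then break; fi
done
echo "$count"   # 3
```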
In any case, it's not always going to be an infinite loop; there's a conditional expression within the loop to test whether a terminal is (still) attached. This is the [[ -t 1 ]]
part. Here's how the -t
test is described:
True if file descriptor fd is open and refers to a terminal.
In a happy circular twist of fate, we're back almost to where we started on this journey through the main
function. File descriptor 1 refers to STDOUT, i.e. standard output. If fff
(still) has its STDOUT connected to a terminal, then the loop continues. If not, it's terminated (|| exit 1
).
Is this all that the loop does? Well, the key part (if you forgive the pun) is the call to key
here:
read "${read_flags[@]}" -srn 1 && key "$REPLY"
The key
function handles keypresses, and acts accordingly. The core action part of fff
, effectively. And it's only called if read
successfully receives a keypress on that occasion. Nice!
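The flags in that read invocation are worth spelling out: -s suppresses echoing, -r stops backslashes being treated specially, and -n 1 returns as soon as a single character has been read, with that character landing in the default REPLY variable. Here's a sketch where a keypress is simulated by piping a single character in:

```shell
#!/usr/bin/env bash
# Simulate a single 'j' keypress on STDIN and capture it with
# the same flags that fff uses.
printf 'j' | {
  IFS= read -srn 1
  echo "key pressed: $REPLY"
}
```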
So that's it for the main
function. Directory startup, option parameter handling, setup and initial calls, and the main loop. Such a lot to learn in so few lines.
If you're still reading, thank you for indulging me, and I hope you've enjoyed the journey as much as I have. There's plenty more to learn from this script; let me know if you found it useful and whether I should venture further.
Update: You may like to know that there's now a second part: Exploring fff part 2 - get_ls_colors.
jq by rewriting a friend's password CLI script.
My friend Christian Drumm published a nice post this week on Adapting the Bitwarden CLI with Shell Scripting, where he shared a script he wrote to conveniently grab passwords into his paste buffer at the command line.
It's a good read and contains some nice CLI animations too. In the summary, Christian remarks that there may be some areas for improvement. I don't know about that, and I'm certainly no "shell scripting magician" but I thought I'd have a go at modifying the script to perhaps introduce some further Bash shell, jq
and fzf
features to dig into.
I don't have Bitwarden, so I created a quick "database" of login information that took the form of what the Bitwarden CLI bw
produced. First, then, is the contents of the items.json
file:
[
{ "name": "E45 S4HANA 2020 Sandbox", "login": { "username": "e45user", "password": "sappass" } },
{ "name": "space user", "login": { "username": "spaceuser", "password": "in space" } },
{ "name": "foo", "login": { "username": "foouser", "password": "foopass" } },
{ "name": "bar", "login": { "username": "baruser", "password": "sekrit!" } },
{ "name": "baz", "login": { "username": "bazuser", "password": "hunter2" } }
]
Then I needed to emulate the bw list items --search
command that Christian uses to search for an entry. As far as I can tell, it returns an array, regardless of whether a single entry is found, or more than one. I'm also assuming it returns an empty array if nothing is found, but that's less important here as you'll see.
I did this by creating a script bw-list-items-search
which looks like this:
#!/usr/bin/env bash
# Emulates 'bw list items --search $1'
jq --arg name "$1" 'map(select(.name | test($name; "i")))' ./items.json
Perhaps unironically I'm using jq
to emulate the behaviour, because the data being searched is a JSON array (in items.json
). I map over the entries in the array, and use the select
function to return only those entries that satisfy the boolean expression passed to it:
.name | test($name; "i")
This pipes the value of the name
property (e.g. "E45 S4HANA 2020 Sandbox", "space user", "foo" etc) into the test
function which can take a regular expression, along with one or more flags if required.
Here, we're just taking the value passed into the script, via the argument that was passed to the jq
invocation with --arg name "$1"
. This is then available within the jq
script as the binding $name
. The second parameter supplied here, "i"
, is the "case insensitive match" flag.
The result means that I can emulate what I think bw list items --search
does:
; ./bw-list-items-search e45
[
{
"name": "E45 S4HANA 2020 Sandbox",
"login": {
"username": "e45user",
"password": "sappass"
}
}
]
Here's an example of where more than one result is found:
; ./bw-list-items-search ba
[
{
"name": "bar",
"login": {
"username": "baruser",
"password": "sekrit!"
}
},
{
"name": "baz",
"login": {
"username": "bazuser",
"password": "hunter2"
}
}
]
Now I could turn my attention to the main script. Here it is in its entirety; I'll describe it section by section.
#!/usr/bin/env bash
set -e
pbcopy() { true; }
copy_uname_and_passwd() {
local login=$1
echo "> Copying Username"
jq -r '.username' <<< "$login"
echo "> Press any key to copy password..."
read
echo "> Copying Password"
jq -r '.password' <<< "$login"
}
main() {
local searchterm=$1
local selection logins
logins="$(./bw-list-items-search "$searchterm")"
selection="$(jq -r '.[] | "\(.name)\t\(.login)"' <<< "$logins" \
| fzf --reverse --with-nth=1 --delimiter="\t" --select-1 --exit-0
)"
[[ -n $selection ]] \
&& echo "Name: ${selection%%$'\t'*}" \
&& copy_uname_and_passwd "${selection#*$'\t'}"
}
main "$@"
For the last few months, my preference for laying out non-trivial scripts has been to use the approach that one often finds in other languages, and that is to define a main function, and right at the bottom, call that to start things off.
This call is main "$@"
which just passes on any and all values that were specified in the script's invocation - they're available in the special parameter $@
which "expands to the positional parameters, starting from one" (see Special Parameters).
I like to qualify my variables, so use local
here, which is a synonym for declare
. I wrote about this in Understanding declare in case you want to dig in further.
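As a quick illustration of why the qualification matters: a local variable shadows any same-named global inside the function, and the global is untouched afterwards:

```shell
#!/usr/bin/env bash
# local confines the variable to the function (and anything it calls).
name="global"

demo() {
  local name="inside"
  echo "$name"
}

demo            # inside
echo "$name"    # global
```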
Because I have my emulator earlier, I can make almost the same-shaped call to the Bitwarden CLI, passing what was specified in searchterm
and retrieving the results (a JSON array) in the logins
variable.
Next comes perhaps the most involved part of the script, which results in a value being stored in the selection
variable (if nothing is selected or available, then this will be empty, which we'll deal with too).
Determining the selection part 1 - with jq
The value for selection
is determined from a combination of jq
and fzf
, which are also the two commands that Christian uses.
This is the invocation:
jq -r '.[] | "\(.name)\t\(.login)"' <<< "$logins" \
| fzf --reverse --with-nth=1 --delimiter="\t" --select-1 --exit-0
The first thing to notice is that I'm using <<<
which is a here string - it's like a here document, but it's just the variable that gets expanded and fed to the STDIN of the command. This means that whatever is in logins
gets expanded and passed to the STDIN of jq
.
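A here string works with any command that reads STDIN, not just jq; for example:

```shell
#!/usr/bin/env bash
# <<< expands the word (here, a variable) and feeds it to the
# command's STDIN, with a trailing newline added.
greeting='hello from a variable'
tr 'a-z' 'A-Z' <<< "$greeting"   # HELLO FROM A VARIABLE
```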
Given the emulation of the Bitwarden CLI above, a value that might be in logins
looks like this:
[
{
"name": "bar",
"login": {
"username": "baruser",
"password": "sekrit!"
}
},
{
"name": "baz",
"login": {
"username": "bazuser",
"password": "hunter2"
}
}
]
Let's look at the jq
script now, which is this:
.[] | "\(.name)\t\(.login)"
This iterates over the items passed in (i.e. it will process the first object containing the details for "bar" and then the second object containing the details for "baz") and pipes them into the creation of a literal string (enclosed in double quotes). This literal string is two values separated with a tab character (\t
) ... but those values are the values of the respective properties, via jq
's string interpolation).
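String interpolation in jq deserves a tiny standalone example: inside a double-quoted string, \(...) splices in the result of any jq expression (this sketch assumes jq is installed):

```shell
#!/usr/bin/env bash
# \(...) interpolates a jq expression into a string; a non-string
# value (like the .login object) is rendered as compact JSON.
echo '{"name":"bar","login":{"username":"baruser"}}' \
  | jq -r '"\(.name) -> \(.login)"'
```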
It's worth noting that the value of .name
is a scalar, e.g. "bar", but the value of .login
is actually an object:
"login": {
"username": "baruser",
"password": "sekrit!"
}
but this gets turned into a string. If "bar" is selected, then the value in selection
will be:
bar {"username":"baruser","password":"sekrit!"}
where the whitespace between the name "bar" and the rest of the line is a tab character.
So given the two values (for "bar" and "baz") above which would have been extracted for the search string "ba", the following would be produced by the jq
invocation:
bar {"username":"baruser","password":"sekrit!"}
baz {"username":"bazuser","password":"hunter2"}
Note that the -r
option is supplied to jq
to produce this raw output.
Determining the selection part 2 - with fzf
This is then passed to fzf
, which is passed a few more options than we saw with Christian's script. Taking them one at a time:
- --reverse - this is the same as Christian and is a layout option that causes the selection to be displayed from the top of the screen.
- --delimiter="\t" - this tells fzf how the input fields are delimited, and as we're using a tab character to separate the name and login information, we need to tell fzf (using just spaces would give us issues with spaces in the values of the names).
- --with-nth=1 - this says "only use the value of the first field in the selection list", where the fields are delimited as instructed (with the tab character here). This means that only the value of the "name" is presented, not the "login" (username and password) details.
- --select-1 - this tells fzf that if there's only one item in the selection anyway, just automatically select it and don't show any selection dialogue.
- --exit-0 - this tells fzf to just end if there's nothing to select from at all (which would be the case if the invocation to bw list items --search returned nothing, i.e. an empty array).

Here's what the selection looks like if no search string is specified, i.e. it's a presentation of all the possible names:
Once we're done with determining the selection, we check to see that there is actually a value in selection
and proceed to first show the name and then to call the copy_uname_and_passwd
function.
Displaying the name and extracting the login details
It's worth highlighting that while fzf
only presents the names in the selection list, it will return the entire line that was selected, which is what we want. In other words, given the selection in the screenshot above, if the name "E45 S4HANA 2020 Sandbox" is chosen, then fzf
will emit this to STDOUT:
E45 S4HANA 2020 Sandbox {"username":"e45user","password":"sappass"}
(again, remember that there's a tab character between the name "E45 S4HANA 2020 Sandbox" and the JSON object with the login details).
So to just print the name, we can use shell parameter expansion to pick out the part we want. The ${parameter%%word}
form is appropriate here; this removes the longest match of the pattern word from the end of the expanded value.
In other words, the expression ${selection%%$'\t'*} means:
- take the value of the selection variable
- remove the longest match of the pattern $'\t'* from the end
The $'...'
way of quoting a string allows us to use special characters such as tab (\t
) safely. The *
means "anything". So the pattern is "a tab character and whatever follows it, if anything".
So if the value of selection
is:
E45 S4HANA 2020 Sandbox {"username":"e45user","password":"sappass"}
then this expression will yield:
E45 S4HANA 2020 Sandbox
The expression in the next line, where we invoke the copy_uname_and_passwd
function, is ${selection#*$'\t'}
which is similar. It means:
- take the value of the selection variable
- remove the shortest match of the pattern *$'\t' from the start

This pattern, then, is "anything, up to and including a tab character".
Given the same value as above, this expression will yield:
{"username":"e45user","password":"sappass"}
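The two expansions can be seen side by side with a line of the same shape that fzf emits:

```shell
#!/usr/bin/env bash
# %% strips the longest match from the end; # strips the shortest
# match from the start. The $'\t' is a real tab character.
selection=$'bar\t{"username":"baruser","password":"sekrit!"}'

echo "${selection%%$'\t'*}"   # bar
echo "${selection#*$'\t'}"    # {"username":"baruser","password":"sekrit!"}
```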
This is very similar to Christian's original script, except that we can use a "here string" again to pass the value of the login
variable to jq
each time. Given what we know from the main
function, this value will be something like this:
{"username":"e45user","password":"sappass"}
which makes for a simpler extraction of the values we want (from the username
and password
properties).
While my main machine is a macOS device, I'm working in a (Linux based) dev container and therefore don't have access right now to the pbcopy
command. As I wanted to leave calls to it in the script to reflect where it originally was, this function that does nothing will do the trick.
There's always more to learn about Bash scripting and the tools we have at our disposal. And to use one of the sayings from the wonderful Perl community - TMTOWTDI - "there's more than one way to do it". I'm sure you can come up with some alternatives too, and some improvements on what I've written.
Keep on learning and sharing.
tee, and netstat options.
I enjoy finding time to catch up on reading blog posts and watching videos in my queue, but the time is often tinged with a slight uneasy feeling that I'm seeing things in passing which are not part of what the main content is about, and I'm not acknowledging or capturing that knowledge.
Here are three very small things I learned (or was reminded of) in passing today, and I thought I'd share them.
Often when using jq
, the command line JSON processor, what I'm looking for is a scalar string, when I just want to extract the value of a property.
$ echo '{"foo":"bar"}' | jq .foo
"bar"
I always vaguely thought that this was jq
just doing what I wanted and giving me the value, which was nice. But it was actually doing more than that. The Invoking jq section of jq
's manual has this (emphasis mine):
jq filters run on a stream of JSON data. The input to jq is parsed as a sequence of whitespace-separated JSON values which are passed through the provided filter one at a time. The output(s) of the filter are written to standard out, again as a sequence of whitespace-separated JSON data.
What jq
aims to do is not only read JSON, but write JSON to STDOUT, unless otherwise directed.
In the above invocation (jq .foo
) I didn't direct jq
to do anything special, so it wrote "bar"
on STDOUT.
And that's appropriate, because "bar"
is completely valid JSON.
I'd vaguely thought that JSON data was only valid in the context of a structure (a map or array) but had never looked into it properly. But my explorations of what jq
can do led me down the familiar path of wonder, whereupon I realised that, according to the JSON specification RFC 7159 (since superseded by RFC 8259), a JSON text (this is as good a word as any to use as a name for a lump of JSON) "is a serialised value". This Stack Overflow answer is a good summary of the situation.
So when jq
gives you just a simple double-quoted string as the output for your incantation, it's giving you JSON. Which is what it is designed to do.
I realised this when watching David Hand - "Non-trivial jq".
It's easy to overlook this perhaps unloved and semi-forgotten Unix command. According to the (very brief!) man page, tee
is a "pipe fitting", which:
copies STDIN to STDOUT, making a copy in zero or more files.
The tee
command crops up in more places than you think; it appears regularly in installation commands. Take this example* from the installation instructions for Docker on Debian Linux:
$ echo "deb [arch=amd64 ...] https://.../linux/debian buster stable" \
| sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
*I've modified the actual command that appears in the Set up the repository section for readability.
Here's another example that appeared in the same video I mentioned earlier:
$ curl http://some.json/api | tee example1.json
In both cases, tee
is used to show the operator what text is flowing into the file. The text (that string starting "deb" in the first example, and the JSON resource retrieved with curl
in the second example) is shown on STDOUT ... and also written to the file specified (those being /etc/apt/sources.list.d/docker.list
and example1.json
respectively in these two examples).
I bring in tee
for specific use cases; for example, in this cache
script, to generate the output, show it, and cache it:
# If there's no cache file or it's older than N mins then
# run the command for real, cacheing the output (again).
if [ ! -f "$cachefile" ] \
|| test "$(find "$cachefile" -mmin +"$mins")"; then
"$@" | tee "$cachefile"
else
cat "$cachefile"
fi
But I want to use tee
more regularly in my daily scripting activities. With process substitution, it can be a powerful ally.
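A minimal, self-contained sketch of that dual behaviour:

```shell
#!/usr/bin/env bash
# tee copies its STDIN to STDOUT *and* to the named file.
tmpfile=$(mktemp)
printf 'hello\n' | tee "$tmpfile"   # 'hello' appears on STDOUT...
cat "$tmpfile"                      # ...and has also been written to the file
rm "$tmpfile"
```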
When I want to see what sockets are being listened to on a machine, my muscle memory types out:
$ netstat -atn | grep LISTEN
This is fine, and gives me what I want - the lines showing what ports are bound with listening processes. Here's an example:
$ netstat -atn | grep LISTEN
tcp4 0 0 127.0.0.1.53 *.* LISTEN
tcp4 0 0 127.0.0.1.28196 *.* LISTEN
tcp6 0 0 fe80::aede:48ff:.49158 *.* LISTEN
tcp6 0 0 fe80::aede:48ff:.49157 *.* LISTEN
tcp6 0 0 fe80::aede:48ff:.49156 *.* LISTEN
tcp6 0 0 fe80::aede:48ff:.49155 *.* LISTEN
tcp6 0 0 fe80::aede:48ff:.49154 *.* LISTEN
tcp6 0 0 fe80::aede:48ff:.49153 *.* LISTEN
tcp4 0 0 *.22 *.* LISTEN
tcp6 0 0 *.22 *.* LISTEN
$
But I learned something as a side effect from reading a great post (which I auto-tweeted from my [URL Notes](https://github.com/qmacro-org/url-notes) repo today): Bringing the Unix Philosophy to the 21st Century - Brazil's Blog.
The author gave this example of using their jc
utility (which looks fascinating) to be able to more easily parse this sort of netstat
output:
$ netstat -tln | jc --netstat | jq '.[].local_port_num'
The -l
flag used here for netstat
is the short form of --listening
, and combined with -t
(--tcp
) and -n
(--numeric
) shows only TCP sockets that are being listened on. Here's an example:
$ netstat -tln
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:2222 0.0.0.0:* LISTEN
tcp6 0 0 :::2222 :::* LISTEN
Of course, because we're not using grep
, we still get the heading output from netstat
here. But the --listening
does the job nicely!
Unfortunately, it won't be any time soon that I can switch to this option, because the macOS version of netstat
doesn't support -l
. In fact, it does have a -l
option but it's for something completely different (printing full IPv6 addresses).
That said, this is yet another small step towards me moving further away from macOS-local activities, and more fully to Linux based dev containers running on my Synology NAS. But that's a post for another time.
tmux using a popup menu.
I was looking at Waylon Walker's tmux fzf session jumper recently, and really liked it, so much so that I dug into the incantations that he shared, resulting in a new post on this autodidactics blog: tmux output formatting.
Anyway, I was still thinking about session switching yesterday, and randomly came across this Reddit post: how to bring up context menu without a mouse. I'd seen the context menu before, by accidentally triggering it with the right mouse button, and it looks something like this:
It was the comment by user Coffee_24_7 that really got me thinking - turns out that this type of menu can be called up with one of the myriad tmux
commands, one which I hadn't yet come across: display-menu
.
I thought that using a context menu like this to present a list of sessions to switch to would be fun and teach me more about the display-menu
command. Basically I just wanted to have the menu display the sessions I had, and when I'd selected one, switch me to it. So, this is what I did.
The first part of the tmux
man page for the display-menu
command looks like this:
display-menu [-O] [-c target-client] [-t target-pane] [-T title] [-x position] [-y position] name key command
Display a menu on target-client. target-pane gives the target for any commands run from the menu.
A menu is passed as a series of arguments: first the menu item name, second the key shortcut (or empty for none) and third the command to run when the menu item is chosen. The name and command are formats, see the FORMATS and STYLES sections. If the name begins with a hyphen (-), then the item is disabled (shown dim) and may not be chosen. The name may be empty for a separator line, in which case both the key and command should be omitted.
-T is a format for the menu title (see FORMATS).
So it looked like I would need something like this to have three sessions listed, with shortcut keys 1-3, and commands to switch to the selected one:
tmux display-menu \
writing 1 'switch-client -t writing' \
dotfiles 2 'switch-client -t dotfiles' \
focus 3 'switch-client -t focus'
(The session names in this example are actually the permanent sessions I use right now.)
From the tmux output formatting post we already know how to do this. The basic tmux list-sessions
command produces something like this:
dotfiles: 2 windows (created Thu Aug 12 10:06:53 2021)
focus: 1 windows (created Wed Aug 11 10:44:18 2021)
writing: 1 windows (created Wed Aug 11 10:46:21 2021) (attached)
To get just the session names, I can supply a format with the -F
option, like this (specifying #S
for "session name"):
tmux list-sessions -F '#S'
This produces:
dotfiles
focus
writing
For each of the menu entries, three values are needed - the session name, an incrementing identifier (which becomes the single key to press for selection), and the switch-client
command to switch to the selected session. There are many ways to turn this list of sessions into something like this; I'm going to use awk
here, for these reasons:
- awk and its history
- the NR built-in variable that holds the record number being processed, and I can use it for the incrementing identifier
- ORS, the Output Record Separator (which is usually a newline), which can be set to a space, to avoid having to use something like tr or paste to bring everything onto one line afterwards

The invocation now becomes:
tmux list-sessions -F '#S' \
| awk 'BEGIN {ORS=" "} {print $1, NR, "\"switch-client -t", $1 "\""}'
This produces:
dotfiles 1 "switch-client -t dotfiles" focus 2 "switch-client -t focus" writing 3 "switch-client -t writing"
Now I can just pass that entire output, via the venerable xargs
, to tmux
's display-menu
command. While I'm at it, I'll use the -T
option to supply a title for the top of the menu display.
This is what the invocation finally becomes:
tmux list-sessions -F '#S' \
| awk 'BEGIN {ORS=" "} {print $1, NR, "\"switch-client -t", $1 "\""}' \
| xargs tmux display-menu -T "Switch session"
It's worth putting this in a script, so I have done: session-menu
.
The final touch in this learning experiment is to bind this invocation to a key in tmux
, so that I can quickly invoke it. I'll choose "prefix Ctrl-s", which means the line I need to add to my config looks like this:
bind-key C-s run-shell session-menu
And with this in place, I can invoke the session switch menu popup very comfortably - this is what it looks like:
If I decide I don't want to switch sessions after all, I can just dismiss the menu with the standard key q
(this is also in the display-menu
part of the tmux
man page).
So there you have it. I do love fzf
and all the things it can do, but it's worth spending some time on this native tmux
feature. There's more to it, as well - for example, you can add separators and disabled items (like the ones in the first screenshot in this post) - but this will do me nicely for now. Happy multiplexing!
This week I came across Waylon Walker who is doing some lovely learning-and-sharing on the topic of tmux
, the terminal multiplexer. He has a YouTube channel and a blog, and there are plenty of tmux
nuggets that are explained in short video and blog post formats.
I read Waylon's post tmux fzf session jumper that he published yesterday, and, having been curious to learn more about his tmux
setup and usage, I stared a bit at the commands he'd shared. Here's what he was using, in his tmux
popup based session selector (I was so happy to learn about tmux
popups from Waylon, more on that topic another time, perhaps):
tmux bind C-j \
display-popup \
-E \
"tmux list-sessions \
| sed -E 's/:.*$//' \
| grep -v \"^$(tmux display-message -p '#S')\$\" \
| fzf --reverse \
| xargs tmux switch-client -t"
That's quite a bit to unpack and learn from! I've taken the liberty of inserting lots of newlines so we can stare at it more easily.
Before we start to unpack it, you can see what it does in this screenshot - on invocation, it brings up a session chooser in a popup for me to be able to switch to a different session. Nice!
The invocation itself is creating a new key binding (prefix
ctrl-j
) with the bind
(short for bind-key
) command. The command that is invoked when this key combination is used is a relatively new one: display-popup
. It seems that the popup feature appeared only about a year ago with tmux
version 3.2 - see the associated change notes and discussion for 3.2 for more context.
The -E
switch goes with the display-popup
command and causes the popup to close automatically when the shell command that's executed within it completes.
The shell command to execute within the popup follows, and is basically the rest of the line - everything inside the outermost double-quote pair ("tmux list-sessions ... -t"
). This is a pipeline that starts with the output of whatever tmux list-sessions
produces - here's an example of that output, from the sessions I'm running right now:
another session: 1 windows (created Fri Aug 6 11:24:34 2021) (attached)
tmux experiments: 2 windows (created Thu Aug 5 21:04:16 2021)
writing: 2 windows (created Fri Aug 6 11:02:10 2021)
I added a third session, "another session", to have more than just a couple for the example, and it's in that third session that I invoked the tmux list-sessions
command just now - which is why that is the one marked as (attached)
in the output.
That output is passed to sed -E 's/:.*$//'
which uses an extended (that's what the -E
denotes) regular expression to replace everything on the line* starting with a colon and going all the way to the end of the line, with nothing. This would change the output above to this:
another session
tmux experiments
writing
* sed
is a stream editor and processes each line in turn
This reduced output is then piped into grep
, the search utility. The -v
switch used inverts the match, effectively printing what is not matched by the regular expression that is given.
And what is that, exactly? It's this: \"^$(tmux display-message -p '#S')\$\"
. First, because we're already in the context of the pair of outermost double-quotes, the double-quotes used here to enclose the entire pattern need to be escaped, with the backslashes. And within those escaped double-quotes we have ^
that anchors the match to the start of the line, and $
(escaped again, because it has special meaning within double-quotes) which anchors to the end of the line. And what should actually be matched? If we stare hard enough, we see that what should be matched is the output of running the following command in a subshell ($(...)
):
tmux display-message -p '#S'
The display-message
command normally writes a message to the tmux
session's status line, but the -p
switch directs the command to write the message to STDOUT instead. What is the message to be written? Well, it's the value of #S
, which is a variable (identified by the #
), and specifically, S
is an alias for session_name
. So this command prints the name of the current session to STDOUT. In the same context as before (i.e. attached to the "another session" session), running this command would produce this:
another session
So the ultimate result of piping the list of session names into this inverted matching grep
invocation is that it would filter out the current session's name, resulting in:
tmux experiments
writing
Why do this? Well, it makes sense that if I'm going to pick another session to jump to, I probably won't want to jump to my current session.
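The effect of these first two stages can be simulated without a running tmux, using the sample output from earlier (here "another session" plays the part of the attached session, hard-coded in place of the subshell):

```shell
# Feed the sample list-sessions output through the sed and grep stages;
# the attached session's name stands in for $(tmux display-message -p '#S').
printf '%s\n' \
  'another session: 1 windows (created Fri Aug  6 11:24:34 2021) (attached)' \
  'tmux experiments: 2 windows (created Thu Aug  5 21:04:16 2021)' \
  'writing: 2 windows (created Fri Aug  6 11:02:10 2021)' \
  | sed -E 's/:.*$//' \
  | grep -v '^another session$'
# -> tmux experiments
# -> writing
```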
This reduced list is then piped into fzf
, about which I've written a couple of posts on this autodidactics blog before:
Using fzf
here is perfect; it's my tool of choice for mini-UIs in the terminal where a choice has to be made and an item (or items) must be selected. And because it fits with the Unix philosophy too, the output is simply the value of the item(s) selected. And in case you're wondering, the --reverse
switch is a synonym for --layout=reverse
which causes fzf
to display the selection from the top.
In the last part of the pipeline, this value (the name of the session that was selected) is passed to xargs
, the powerful but oft-misunderstood utility that helps you build and execute commands from STDIN. Here, with the invocation xargs tmux switch-client -t
, we're using it to pass that selected value (e.g. "writing") as a parameter, adding it to the end of the entire set of arguments passed to xargs
, resulting in an invocation like this:
tmux switch-client -t writing
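You can see xargs doing this argument-appending for yourself by swapping echo in for tmux:

```shell
# Substituting echo for tmux makes the constructed invocation visible:
printf 'writing\n' | xargs echo tmux switch-client -t
# -> tmux switch-client -t writing
```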
This of course is the denouement to which we've been building up, and our tmux
client is switched to the session we selected. Success!
The reference to the #S
variable got me thinking, and I remembered seeing an awful lot of potential, particularly in the "FORMATS" section of the tmux
man page. So I thought I'd use this opportunity, having been inspired by what Waylon showed, to see if I could come up with a different way of doing it.
First, could I use something from "FORMATS" to avoid the need to invoke sed
in the pipeline (i.e. to not do this bit: sed -E 's/:.*$//'
)? Turns out that the answer is yes; a format string can be used with the list-sessions
command, with the -F
option. So this would be an alternative, using the same variable as we saw earlier. Here's the invocation:
tmux list-sessions -F '#{session_name}'
which produces:
another session
tmux experiments
writing
That is, none of the extraneous information is there, so we don't need to remove it.
We can go a bit further, too, with a conditional expression. Here it is, in action:
tmux list-sessions -F '#{?session_attached,,#{session_name}}'
The value of the session_attached
variable is 1
if the session currently being listed is attached (i.e. is the one we're in) and 0
if not. So this conditional expression #{?COND,A,B}
outputs A if condition COND is true, otherwise B. What a lovely ternary style operator just waiting to be used within tmux
formatting!
So this version produces:
tmux experiments
writing
The empty line comes from the fact that for the A value in the conditional expression above, we specified nothing (there was nothing between the two commas ,,
) when the session to which we're currently attached is listed.
That means we still have something to filter out, with grep
, but it becomes simpler, with the briefest of patterns needed for grep
, to match and filter out (with -v
) empty lines: ^$
- i.e. "the start of the line followed immediately by the end of the line". Like this:
tmux list-sessions -F '#{?session_attached,,#{session_name}}' | grep -v '^$'
This produces:
tmux experiments
writing
which then can be passed as before into fzf
for selection.
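Again, this stage can be simulated without tmux itself; the blank line that the conditional emits for the attached session is exactly what grep strips:

```shell
# Simulating the -F conditional output (a blank line for the attached
# session) and removing the blank line with grep -v '^$':
printf '\ntmux experiments\nwriting\n' | grep -v '^$'
# -> tmux experiments
# -> writing
```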
This is in no way an attempt to "better" Waylon's post - far from it. It's a different way of approaching things, but most importantly, it's a classic example of folks learning together and from each other. Thanks Waylon, I'm looking forward to learning more from what you share.
]]>GITHUB_ACTOR
on a re-opened pull request reflects the person re-opening it, not the original creator.
Over on the Community Guidelines content for SAP's Open Documentation Initiative there was a recent pull request (PR) that was opened by user cyberpinguin
.
We have a GitHub Actions workflow Disallowed content checker that ensures that contributions coming in via PRs are targeting the appropriate content. The workflow was duly triggered, as expected, and appropriately alerted the user that the contribution was outside of the desired target location.
We want to allow administrators of the repo to be able to maintain content across the whole repo, rather than restrict them, and we want those who are not administrators to be restricted. We do this by checking the collaborator permissions, using the Collaborators section of the GitHub API, like this:
gh api \
--jq .permission \
"/repos/$GITHUB_REPOSITORY/collaborators/$GITHUB_ACTOR/permission"
So far so good.
Shortly after opening the PR, the user closed it (by mistake, I think), and it was re-opened by my colleague and fellow Contribution Guidelines administrator Jens. As a result of this re-opening of the PR, the workflow was triggered again.
However, this time, no disallowed content alert was raised. Why was this?
Looking at the execution for that workflow run, it's clear that the steps that would have caused an alert to be issued were skipped; the skip logic looks like this:
- id: check_files_changed
name: Checks if disallowed content has been changed
if: env.repo_permission != 'admin'
uses: dorny/paths-filter@v2
with:
list-files: 'shell'
filters: |
disallowed:
- '!docs/**'
- id: comment_on_disallowed
if: steps.check_files_changed.outputs.disallowed == 'true'
...
So it would seem that the permissions for the GITHUB_ACTOR
in this subsequent execution were 'admin'. Why?
Because, as it turns out (and I confirmed this with a simple test just now) the value of GITHUB_ACTOR
is set to the user who opens -- or re-opens -- a PR. In this case it was Jens, an administrator.
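One consequence worth noting: the pull_request event payload still carries the original PR author, separately from the actor, so that's one place to look if you need the creator rather than the (re-)opener. Here's an illustrative sketch using a made-up sample payload (the field names sender.login and pull_request.user.login are from the GitHub webhook payload; the values and file path are hypothetical):

```shell
# Illustrative sketch: a pull_request event payload carries both the
# actor (sender) and the original PR author; sample values are made up.
cat > /tmp/event.json <<'EOF'
{
  "action": "reopened",
  "sender": { "login": "reopening-admin" },
  "pull_request": { "user": { "login": "original-author" } }
}
EOF
# GITHUB_ACTOR corresponds to sender.login; the creator is still here:
jq -r '.pull_request.user.login' /tmp/event.json
# -> original-author
```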
This is not what I'd expected, so I thought I'd write it up and share it.
]]>Rob Pike and Brian Kernighan authored a paper in 1984 titled "Program design in the UNIX environment". In it, they explore the difference between adding features to existing programs, and achieving the same effect through connecting programs together.
It's not unusual to read about the UNIX philosophy of "small programs, loosely joined", about the power of small, single-responsibility programs doing one thing and doing it well, used together to form something greater than merely the sum of its parts.
What really jumped out at me was the thinly veiled scorn poured onto some design decisions relating to the Berkeley distribution version of the humble ls
command, design decisions ultimately based on a form of narrow thinking, rather than consideration of the UNIX environment - particularly the shell and the user space programs - as being the most important "meta tool".
I won't try to summarise what that paper says about those extensions to ls
; I'd rather encourage you to go and read the paper yourself - it's only 7 pages. Instead, I'll relate a small example from today of how the paper has helped me remember the difference between focusing on a single tool to the detriment of the rest of the tools in the environment, and thinking about the entire environment as a single entity.
I have a small script, skv
which is short for "service key value", which returns a value from a property within a JSON dataset representing a (Cloud Foundry) service key. In retrospect, calling it skv
was a little short-sighted, because it would work on any JSON dataset, not just service keys. I guess that's another example of (not) thinking about the environment as the real "meta tool". But I digress.
Today I've been using skv
to grab OAuth-related values (client ID, client secret, and so on) from a JSON dataset and use them to construct URLs to follow an OAuth authorisation code flow. Some of those values are put into the query string of the URLs I'm constructing, and so I need to be careful to URL encode them.
My first reaction was to go into the source code of the skv
script to add a switch, say, -u
, along with the corresponding logic, so that I could ask, when invoking skv
, that the value be returned URL encoded.
I'm happy to say that a second after having this reaction, I felt almost horrified that I was about to do exactly what those Berkeley authors did, and add an unnecessary switch to a single program. I already have urlencode
available to me in my environment, so to get a URL encoded value from the JSON dataset, I'd just have to do something like this:
skv uaa.clientid | urlencode
This is only a trivial example, and there's a difference because here I'm just stringing together commands in Bash scripts, but I think the principle still holds here. I feel that this approach is embracing the UNIX approach described in the paper.
What's more, there was nothing stopping me encapsulating the pipeline-based use of these two simple tools (skv
and urlencode
) in a little script skvu
, that I could use to save some keystrokes:
urlencode "$(skv "$*")"
In fact here I've re-jigged how these two commands work together, as the version of urlencode
I settled on works on a value passed as an argument, rather than passed via STDIN. But the beauty of the shell means that this just means I have to express my intention in a way that echoes that approach.
Anyway, that's all I wanted to share here - it's not a crazily interesting and little-understood corner of the UNIX church that I'm discovering here, merely the delight in something resonating with me to the extent that the strong reverberations carried through to something that I was actually doing, without consciously thinking of what I'd read.
I hope you enjoy the paper as much as I did!
]]>Today I was pointed in Warp's direction on Twitter by Christian Pfisterer and Christian Drumm.
To quote Warp's website:
Warp is a blazingly fast, Rust-based terminal that makes you and your team more productive at coding and DevOps.
It's a fascinating venture, for many reasons. While the team is not looking to reinvent the entire terminal, a lot of what they describe feels "foreign" to me, as a long-time terminal user (who started out on a paper-based Superterm Data Communications Terminal hooked up to a PDP-11). I've read the How it works post, which is great. Here are some random thoughts on that, and also on the Warp beta welcome video.
Warp is designed from the outset to be fast; written in Rust (a language designed in part with a major focus on performance) and using the GPU for rendering (which, to be fair, other terminal programs also do, such as my current terminal program of choice, kitty).
What struck me is how far away my brain is from the sort of speed that this team is talking about; while I guess I still think about terminal speed in terms of baud, and based on characters, the measurement for Warp is in frames per second; not only that, it's in the early gaming ballpark of 60fps. It feels a little odd thinking about a terminal in those terms.
The input editor is effectively reinvented. I'm in two minds about this; moreover, one of those minds is awash with ignorance and uncertainty. First and most importantly, the editing capabilities on command lines today, at least in popular shells such as Bash and ZSH, are very advanced already. I'm not talking about what perhaps most people use - the default Emacs based editing facilities, which I think the demo is comparing Warp to - but the Vi based ones.
While I do like the idea of being able to more comfortably edit multiple lines, the other features feel rather redundant. With a simple set -o vi
I am in total control of how I edit, fix, rearrange and generally prepare my input. Very powerful.
The other mind is wondering about how this translates to remote sessions. The beauty of standard tools and shell facilities means that I can have exactly the same experience whether I'm local, or remote, via an ssh-based connection, to a machine elsewhere on the network, or to a container in a Kubernetes cluster.
Will the input editor allow this to happen in these remote contexts too? The SSH section does seem to say that this is possible, but I'm also wondering about whether that is also valid for ssh sessions within tmux panes?
Another feature of the Warp terminal, very nicely demonstrated in the video, is the concept of blocks. This makes a lot of sense to me, and I can imagine already how it will help me visually move up and down examining and working with previous commands and their output.
What worries me slightly is that it looks, at least from the demo, that I'll have to use the mouse if I want to scroll further up, via the "snackbar" (I'm not sure why it's called that, chalk another item down to my ignorance). I wonder if I'll be able to use the terminal as a terminal (yes, that's deliberately provocative) and keep my hands where they belong - on the keyboard?
Whatever the answers to these questions turn out to be (and please note that I've not seen or tried out Warp for myself yet - I've added myself to the list requesting beta access when it's available), there's one thing that I'm very happy to see.
And that's fresh thinking and energy going into what I think is one of the most misunderstood superpowers of today's computing space. So whatever this team comes up with, it's an automatic thumbs up for me. I may come to enjoy all Warp's features, or I may only like some of them. But I love the focus and brain power that's going into Warp.
Here's a picture of Warp, from the very interesting How Warp works article I mentioned earlier.
Make it so!
]]>TL;DR - My Synology DS1621+ NAS recognises the USB-connected APC SMT750IC UPS and will shut itself down on signals sent from it.
Since buying my Synology NAS DS1621+ a few weeks ago, we've had one power outage in the village. I'd been musing on the idea of getting a UPS for the NAS, and this event helped me come to a decision (a little late, perhaps, but there you go). It took me longer than it should have done to work out which UPS might be applicable and compatible. I couldn't find definitive confirmation that the UPS I was looking at was going to work with the NAS; in particular, I wanted to be as sure as I could that the USB connection would indeed be recognised by the NAS, which would receive power event signals and shut itself down as appropriate when the UPS had to switch to battery power.
Synology maintain a Compatibility List and the APC Smart-UPS SMT750IC is indeed in there, with the value "Vendor Recommended" in the "Tested by" column. Reading around, I got the impression that this indeed meant what I suspected it meant, i.e. Synology themselves hadn't tested it, but instead were relying on APC to tell them. While I had no reason to doubt APC, I am fond of the proverb Доверяй, но проверяй (Trust, but verify) and needed more solid evidence, especially before splashing out the £300+ on the device (shipping it back might also have been a pain, due to its extreme weight).
I'd seen a few bits and pieces about the SMT750IC model's predecessor, the SMT750I, and some evidence that folks were successfully using this older SMT750I model with their Synology NAS devices, including the USB-based shutdown flows. My research told me that the "C" suffix on the newer model represented a new cloud enabled feature, described in the blurb thus: "APC SmartConnect is a proactive remote UPS cloud monitoring feature that is accessible from any internet connected device". I'd also seen some vague confirmation that alongside some minor performance improvements, this cloud feature was really the only difference.
So it would seem reasonable to assume that the SMT750IC was going to be OK. But viewing the ports on the back of each device showed me that the USB connection was different (you can also see the green-coloured ethernet port on the SMT750IC relating to its "cloud enabled" feature):
Was this USB port difference significant? It was hard to tell. Perhaps the USB port on the SMT750I was a type B for a reason? Had the USB support on the SMT750IC changed?
Further research suggested that on the one hand, if the APC "Powerchute" software was supported by the UPS, it was likely to work with the NAS, mostly because of Synology's support for the Network UPS Tools (NUT) standard. But then I read elsewhere that this standard had multiple implementations, so it wasn't a certainty by any means.
In the end, I asked on the Amazon product page, and also called their UK support centre. Both avenues resulted in a positive outcome - I got a positive reply from APC Customer Care and also from the user "Pegasus", and the person on the phone also confirmed this.
So if you're in the same situation as I was, perhaps this post will help.
Here are some screenshots of when I tested the UPS and NAS, removing the power from the UPS so that it switched to battery mode.
The UPS settings on the NAS, showing the UPS is recognised via USB.
The UPS's normal status showing via the "cloud enabled" feature on APC's website.
The UPS's front panel display shortly after I removed the power.
The alert on the NAS when the UPS has switched to battery mode.
The UPS's warning status showing via the "cloud enabled" feature on APC's website when the UPS is in battery mode.
]]>This post describes the steps I took to set up remote access to Docker running on my NAS, in the simplest and "smallest footprint" possible way I could find. There are other approaches, but this is what I did. It was a little less obvious than one might have expected, because of the way the Docker service is hosted on the NAS's operating system, and I ended up having to read around (see the reading list at the end).
Having followed the container revolution for a while, I've become more and more enamoured with the idea of disposable workspaces, services and apps that can be instantly reified and leave no trace when they're gone. This was one of the reasons I opted for a Synology NAS, my first NAS device (see Adding a drive to my Synology NAS), because it is to act not only as a storage device, but as a container server.
The Docker experience out of the box with the NAS's operating system, DiskStation Manager (DSM), is very pleasant, via a graphical user interface. I've been very happy with the way it works, especially in the initial discovery phase.
But for this old mainframe and Unix dinosaur, a command line interface with access to myriad remote servers is a much more appealing prospect, and the separation of client and server executables in Docker plays to the strengths of such a setup. So I wanted to use my Docker command line interface (CLI) docker
to interact with the resources on the Synology NAS's Docker service. Not only for the sheer convenience, but also to be able to spin up CLIs and TUIs, as remote containers, and have seamless access to them from the comfort of my local machine's command line.
Here's what I did, starting from the Docker package already installed and running on the NAS.
From a command line perspective, this out of the box installation also gave me access to be able to run the docker
client CLI while remotely logged into the NAS, but only as root, i.e. directly, or via sudo
as shown in this example:
; ssh ds1621plus
administrator@ds1621plus:~$ sudo docker -v
Password:
Docker version 20.10.3, build b35e731
administrator@ds1621plus:~$ sudo docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
homeassistant/home-assistant latest 832ca33fe14a 4 weeks ago 1.1GB
linuxserver/freshrss latest 09ffc08f14fe 4 weeks ago 120MB
administrator@ds1621plus:~$
The first thing I wanted to do was to allow myself to run the docker CLI as a non-root user; in my case (as in many basic Synology NAS contexts) this is as the administrator user.
In the standard Docker Post-installation steps for Linux, there's a specific section for this: Manage Docker as a non-root user. However, due to the way that users and groups are managed in DSM, this specific approach didn't work; there was no docker
group that had been created, to which the administrator
user could be added, and manually adding the group wasn't the right approach either, not least because DSM doesn't sport a groupadd
command.
In fact, there are DSM specific commands for managing local users, groups, network settings and more. They all begin syno
and are described in the CLI Administrator Guide for Synology NAS.
So here's what I did. I'm a check-before-and-after kind of person, so some of these steps aren't essential, but they helped me to go carefully.
First, I wanted to check that I wasn't about to clobber any existing docker
group:
administrator@ds1621plus:~$ grep -i docker /etc/group
administrator@ds1621plus:~$
Nope, no existing docker
group, at least in the regular place.
Time to create the group then, using the DSM specific command; I specified the administrator
user to be added directly, as I did it:
administrator@ds1621plus:~$ sudo synogroup --add docker administrator
Group Name: [docker]
Group Type: [AUTH_LOCAL]
Group ID: [65538]
Group Members:
0:[administrator]
Checking to see if the group was now listed in /etc/group
confirmed that these DSM specific commands weren't doing anything out of the ordinary:
administrator@ds1621plus:~$ grep -i docker /etc/group
docker:x:65538:administrator
Great, the docker
group now exists, with administrator
as a member.
The Manage Docker as a non-root user steps mentioned earlier showed that this is pretty much all one needs to do on a standard Docker-on-Linux install. However, there was an extra step needed on DSM, to actually give this new docker group access to the Unix socket that Docker uses.
Before I did this, I wanted to see what the standard situation was:
administrator@ds1621plus:~$ ls -l /var/run/ | grep docker
drwx------ 8 root root 200 Jun 10 17:40 docker
-rw-r--r-- 1 root root 5 Jun 10 17:40 docker.pid
srw-rw---- 1 root root 0 Jun 10 17:40 docker.sock
The socket (docker.sock
) in /var/run/
was owned by root
as user and root
as group. This meant that no amount of membership of the docker
group was going to get the administrator
user any closer to being able to interact with Docker.
So I changed the group ownership to docker
:
administrator@ds1621plus:~$ sudo chown root:docker /var/run/docker.sock
administrator@ds1621plus:~$ ls -l /var/run/ | grep docker
drwx------ 8 root root 200 Jun 10 17:40 docker
-rw-r--r-- 1 root root 5 Jun 10 17:40 docker.pid
srw-rw---- 1 root docker 0 Jun 10 17:40 docker.sock
Now for the big moment. I logged out and back in again (for the new group membership to take effect) and tried a docker
command:
administrator@ds1621plus:~$ logout
Connection to ds1621plus closed.
# ~
; ssh ds1621plus
administrator@ds1621plus:~$ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
homeassistant/home-assistant latest 832ca33fe14a 3 weeks ago 1.1GB
linuxserver/freshrss latest 09ffc08f14fe 4 weeks ago 120MB
Success!
Now that I was able to safely interact with Docker on the NAS, I turned my attention to doing that remotely.
Elsewhere in the Docker documentation, there's Protect the Docker daemon socket which has tips on using either SSH or TLS to do so. I'd already established public key based SSH access from my local machine to the NAS, and maintain SSH configuration for various hosts (which you can see in my dotfiles). So the SSH route was appealing to me.
The idea of this SSH access is to connect to the remote Docker service via ssh
and run docker
like that, remotely.
However, trying a basic connection failed at first; running a simple ssh
-based invocation of docker
on the remote machine (ssh ds1621plus docker -v
) resulted in an error that ended like this:
"Exit status 127, please make sure the URL is valid, and Docker 18.09 or later is installed on the remote host: stderr=sh: docker: command not found"
In desperation I even tried explicit values (ssh -l administrator -p 2222 ds1621plus docker -v
) but got the same message.
It turns out that on SSH access, the environment variables are not set the same as when you connect via ssh
for an actual login session. Crucially, the value of the PATH
environment variable was rather limited. Here's the entirety of the environment on an ssh
based invocation of env
:
; ssh ds1621plus env
SHELL=/bin/sh
SSH_CLIENT=192.168.86.50 54644 2222
USER=administrator
MAIL=/var/mail/administrator
PATH=/usr/bin:/bin:/usr/sbin:/sbin
PWD=/volume1/homes/administrator
SHLVL=1
HOME=/var/services/homes/administrator
LOGNAME=administrator
SSH_CONNECTION=192.168.86.50 54644 192.168.86.155 2222
_=/usr/bin/env
We can see that there are only four directories in the PATH
: /usr/bin
, /bin
, /usr/sbin
and /sbin
.
On the NAS, the docker
client executable was in /usr/local/bin
, not in the PATH
; this was the cause of the error above - via a simple ssh
invocation, the docker
command wasn't found.
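The general mechanism at play here can be demonstrated locally, independent of ssh and the NAS (the paths below are hypothetical, just for illustration):

```shell
# Demonstrating that command lookup is entirely PATH-driven: a command
# living outside the PATH's directories simply isn't found.
mkdir -p /tmp/pathdemo/bin
printf '#!/bin/sh\necho hello\n' > /tmp/pathdemo/bin/mycmd
chmod +x /tmp/pathdemo/bin/mycmd

# With a restricted PATH (like the one in the ssh invocation), no luck:
PATH=/usr/bin:/bin sh -c 'command -v mycmd || echo "mycmd: not found"'
# -> mycmd: not found

# With the extra directory added to PATH, the command is found:
PATH=/usr/bin:/bin:/tmp/pathdemo/bin sh -c 'command -v mycmd'
# -> /tmp/pathdemo/bin/mycmd
```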
So I had to address this, and I did via SSH's "user environment" feature.
SSH and its implementation, on client and server, is extremely accomplished, which is code for "there's so much about SSH I don't yet know". One thing I learned about in this mini adventure is that the SSH daemon has support for "user environments", via the .ssh/environment
file, which is described in the FILES section of the sshd documentation.
Basically, setting the PATH
to include /usr/local/bin
, via this support for user environments, was exactly what I needed. What's more, I was not having to "hack" anything on the NAS (such as copying or symbolic-linking docker
to another place so that it would be accessible) that I might regret later.
First, though, I needed to turn on user environment support on the SSH daemon service on the NAS. For this, I uncommented PermitUserEnvironment
in /etc/ssh/sshd_config
and set the value to PATH
, with this result:
administrator@ds1621plus:~$ sudo grep PermitUserEnvironment /etc/ssh/sshd_config
PermitUserEnvironment PATH
I'd originally set this value to all, but have since learned that you can restrict the setting to just the environment variable(s) that you want, i.e. PATH in this case.
I then restarted the NAS; I could have messed around finding a neater way just to restart the SSH daemon, but I'd read about some other gotchas doing this, and I was the only one using the NAS at the time, so I went for it.
Now I could use the .ssh/environment
file in the administrator
user's home directory to set the value of the PATH
environment variable to what I needed.
To do this, I just started a remote login session on the NAS via ssh
, and asked env
to tell me what this was and also write it to the .ssh/environment
file directly:
; ssh ds1621plus
administrator@ds1621plus:~$ env | grep PATH | tee .ssh/environment
PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/syno/sbin:/usr/syno/bin:/usr/local/sbin:/usr/local/bin
administrator@ds1621plus:~$
And that was it; when running commands remotely via ssh
, this PATH
value was now applicable. So the remote invocation of docker
now worked:
; ssh ds1621plus docker -v
Docker version 20.10.3, build b35e731
This final step was just for convenience, but worth it. With a context, I can avoid having to use ssh
explicitly to interact with Docker on the NAS remotely.
It's described in Use SSH to protect the Docker daemon socket mentioned earlier, so I'll just show here what I did.
Create the context:
; docker context create \
--docker host=ssh://administrator@ds1621plus \
--description="Synology NAS" \
synology
List the contexts, and select the new synology
context for use:
; docker context list
NAME TYPE DESCRIPTION DOCKER ENDPOINT KUBERNETES ENDPOINT ORCHESTRATOR
default * moby Current DOCKER_HOST based configuration unix:///var/run/docker.sock https://api.c-681fdc3.kyma.shoot.live.k8s-hana.ondemand.com (default) swarm
synology moby Synology NAS ssh://administrator@ds1621plus
# ~
; docker context use synology
synology
# ~
; docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
homeassistant/home-assistant latest 832ca33fe14a 4 weeks ago 1.1GB
linuxserver/freshrss latest 09ffc08f14fe 4 weeks ago 120MB
Note that last command docker image ls
; I invoked that on my client machine, but because of the context set, and the SSH based connection set up, the target was the Docker engine running on the Synology NAS. Success!
Here's what I read to find my way through this. Documents referenced in this post are also included here.
]]>Earlier this month I took delivery of my first Network Attached Storage (NAS) device - a Synology DS1621+. It has 6 drive bays. Note that you can sort of tell this from the model number:
Synology also offer an expansion unit, a DX517 which has 5 drive bays, and you can attach two of them to the DS1621+ adding up to a total of 6 + (5 + 5) = 16.
Thanks to Frank for answering all my early questions on Synology NAS systems.
I bought two Seagate IronWolf 4TB drives. I knew I wanted to go for the Synology Hybrid Raid (SHR) disk arrangement (this has many advantages that appealed to me, not least the ability to add different sized drives in the future).
SHR requires at least two drives, which is why two was the minimum purchase that made sense. But I also bought a couple more, a week later. It's amazing how cheap, relatively speaking, spinning disk storage has become.
With the initial two drives, I'd set up a storage pool, following the instructions (it was pretty straightforward). Here's what the status of that storage pool looked like:
There are a few things that are worth noting here:
Now the second two drives have arrived, I decided to add one of them to the existing storage pool. I'm thinking of using the second one as a "Hot Spare" and seeing how that goes, but that's for another time.
So I added it to the caddy:
I checked the specifications of the DS1621+ and noted that it supported hot swapping, so I could insert the drive and caddy back in as the device was running, which I did:
A few moments later, I re-checked the storage manager and it showed me the new drive:
Here's the newly inserted drive in a "Not initialized" state in the HDD/SDD list:
Now the drive was known, I could add it to the storage pool. I did this with the "Action -> Add drive" menu item in the storage pool window, and the flow was fairly predictable, starting with the drive selection:
After a warning about any data being erased on the new drive, I was presented with a summary before proceeding:
The result was as expected. The storage pool had this new drive listed, and went into an "Expanding" status (note the capacity is not yet shown as being increased):
Checking back over in the "HDD/SDD" display, the drive status has gone from "Not initialized" to "Healthy", and is showing an assignment to Storage Pool 1.
That was pretty much it - it wasn't an unexpected flow, but I was curious as to what would happen and how it would happen. Perhaps this helps someone who is also wondering. I've been writing this as I've been working through the flow, and now I've come to the end of this post, the status is still "Expanding" and shows that it's still less than 1% through a check of parity consistency - so it has a long way to go yet. I think I now realise why a "Hot Spare" might be useful. Anyway, I'll bring this post to an end now, and update it when the status changes.
Update: 12 hours later, it's still at it - the status of the storage pool is "Expanding (Checking parity consistency 47.33%)". Some way to go yet.
Further update: It's the next morning, and the storage pool is now showing "Healthy" again, and its new expanded state of 7.27TB:
Further reading:
This should go without saying, but alas, we're not in an ideal world. Equality in tech should be the backbone, the basis, upon which we run our industry. But it's not.
I interact a lot on social media, I live stream too. And I haven't had a single occasion where I've been harassed in any way. I'd like to think that this is because everyone is spellbound by what I have to say and what I'm showing. But it's not. It's because I'm male.
I've had the pleasure of watching some awesome folks streaming on Twitch and YouTube, and have witnessed them being harassed. And guess what, all of the targets, on all of the occasions, are female. This is not a coincidence.
To those people thinking it's OK to make inappropriate comments, or worse, I say this: What is WRONG with you morons? It's not OK. Very not OK.
To those who already get it, great. Perhaps the next step is to think about donating time in favour of helping and encouraging girls and women in tech. I've been very lucky to have been able to do this in a teaching capacity over the years, especially with youngsters (see reading links below). But even simpler is to just help female tech folks level up by supporting them on social media, helping them to grow and be the role models for the next generation too. And also, I've suddenly grokked that being vocal about this also helps.
I support equality in tech, and so should you.
Further reading:
There are many folks that I observe giving to the community. This giving takes many forms, such as providing software in an open source manner, supporting that software, sharing knowledge, and mentoring. I wanted to look into how I could provide a bit of support. I give to charity as part of my remuneration scheme, and I'm very fortunate to be able to do that. But that seems more of a "given" and not particularly specific, nor do I have any direct connection to the recipients.
There are various ways to support individuals online - I've used the "buy me a coffee" approach, I've sent small amounts via PayPal to folks to say thanks (e.g. for the Victor Mono font), subscribed to folks on Twitch, gifted subscriptions, and so on. These are all avenues available to us, and I'd encourage you to look into them.
But there's an avenue that resonates quite well with me, one that was introduced to me by Alex Ellis. And that's GitHub Sponsors. Subjective, I know, but I feel that sponsoring someone at this layer is a useful thing to do. The facilities offered by this mechanism also allow the sponsor relationship to be on an automatic and regular basis too.
I've no idea how far I'll go yet, I'm just really starting. So far I'm sponsoring Alex for his work on Kubernetes, small machines and everything in between, and have also sponsored Vidar Holen, mostly for shellcheck, which has been a key part of how I'm trying to improve my shell scripting. I've just started sponsoring Rob Muhlestein for everything that he shares and for his long term efforts to share knowledge with junior developers on his Twitch live streams.
My contributions are minimal, but this is a scale thing - I would like to encourage you to consider doing the same and sponsoring someone for their work that collectively helps strengthen our community.
Yesterday I wrote up some initial notes on my foray into Markdown linting. Today I continue my journey of learning and discovery by attempting to get the Markdown linting working in a GitHub Action workflow, so I can have the checks done on pull requests.
Beyond creating the workflow definition itself, there are only a few parts to getting Markdown content linted in the context of a pull request:

- getting the pull request content into the runner workspace
- installing the `markdownlint` tool and any custom rule packages
- running the linter on that content

Since I'm able to quickly look at previous examples of GitHub Actions workflow definitions using my workflow browser, it was quite easy to create a simple workflow definition. Here's what the start looks like:
name: Markdown checks
on:
pull_request:
branches: [main, master]
jobs:
lint-markdown:
runs-on: ubuntu-20.04
steps:
...
I've moved from specifying `ubuntu-latest` to `ubuntu-nn.nn` for a more stable (or perhaps "predictable") runner experience.
Nothing exciting in this workflow definition so far; I've included both main
and master
in the list of branches because I've been testing with an older repository that still has master
as the default branch.
To run markdownlint
on the content of the pull request, we need that in the runner workspace, and the usual use of the standard actions/checkout action does the job here:
- uses: actions/checkout@v2
While the whole process will work without this step, it provides an extra level of comfort for those involved in the pull request review.
The linting is performed in the runner, and the output (from markdownlint
) is available in the workflow execution detail:
However, there's a small disconnect between the place of change and discussion (the pull request) and this workflow output.
There's a special, slightly mysterious feature that can help address this disconnection. This is the "matcher" feature, and is mysterious in that it's not particularly prominent in the main GitHub Actions documentation ... although it is explained in the Actions Toolkit documentation, specifically in the ::Commands section.
The general idea is that matchers can be added to a workflow execution. Matchers take the form of configuration that uses a regular expression to pick out parts of output messages and work out which bits are what. In other words, work out which file, line number and column each message applies to, as well as the message code and text.
This is what a matcher looks like, and it's the one I'm using to match the markdownlint
output:
{
"problemMatcher": [
{
"owner": "markdownlint",
"pattern": [
{
"regexp": "^([^:]*):(\\d+):?(\\d+)?\\s([\\w-\\/]*)\\s(.*)$",
"file": 1,
"line": 2,
"column": 3,
"code": 4,
"message": 5
}
]
}
]
}
The regular expression actually appears slightly more complex than it is, because the backslashes that introduce the metacharacters `\d` (digit), `\s` (whitespace) and `\w` (alphanumeric) are themselves escaped with backslashes in the JSON string value (so e.g. `\d` becomes `\\d`). This is so they don't get interpreted as JSON escape sequences.
If we stare at the output earlier, we see this:
docs/b.md:5 MD022/blanks-around-headings/blanks-around-headers Headings should be surrounded by blank lines [Expected: 1; Actual: 0; Below] [Context: "### Something Else"]
docs/b.md:6 MD032/blanks-around-lists Lists should be surrounded by blank lines [Context: "- one"]
docs/b.md:10:10 MD011/no-reversed-links Reversed link syntax [(reversed)[]]
Applying the regular expression, we can see that it will indeed pick out the values as desired. Taking the last message line as an example, we get:
| Regular expression part | Matched text | Value for |
|---|---|---|
| `^` | (start of line) | |
| `([^:]*)` | `docs/b.md` | file |
| `:` | `:` | |
| `(\d+)` | `10` | line |
| `:?` | `:` | |
| `(\d+)?` | `10` | column |
| `\s` | (a space) | |
| `([\w-\/]*)` | `MD011/no-reversed-links` | code |
| `\s` | (a space) | |
| `(.*)` | `Reversed link syntax [...]` | message |
| `$` | (end of line) | |
In this table, the escaping backslashes have been removed, as they're only there to make the JSON string happy.
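As a rough cross-check, here's a sketch of the same match in Bash. Note that Bash's `=~` operator uses POSIX extended regular expressions, which don't support `\d`, `\s` and `\w`, so equivalent POSIX character classes are substituted:

```shell
line='docs/b.md:10:10 MD011/no-reversed-links Reversed link syntax [(reversed)[]]'

# POSIX ERE translation of the matcher's pattern:
# \d -> [0-9], \s -> [[:space:]], \w -> [[:alnum:]_]
re='^([^:]*):([0-9]+):?([0-9]+)?[[:space:]]([[:alnum:]_/-]*)[[:space:]](.*)$'

if [[ $line =~ $re ]]; then
  echo "file:    ${BASH_REMATCH[1]}"
  echo "line:    ${BASH_REMATCH[2]}"
  echo "column:  ${BASH_REMATCH[3]}"
  echo "code:    ${BASH_REMATCH[4]}"
  echo "message: ${BASH_REMATCH[5]}"
fi
```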
The result of having a matcher like this is that as well as having the messages available in the workflow execution detail, we get the messages in context too, which is far more comfortable. They appear in the workflow execution summary, like this (see the "Annotations" section):
Moreover, each message appears directly below the line to which it applies, like this:
To get this to work, the matcher configuration needs to be added with the add-matcher
directive, in a step, like this:
- run: "echo ::add-matcher::.qmacro/workflows/markdownlint/problem-matcher.json"
There is actually a GitHub Action, xt0rted/markdownlint-problem-matcher that does this for you, but I'm still in two minds as to whether to use a "black box" action or something direct for things like this. Only time will tell.
Next, it's time to install the actual markdownlint
tool, along with the custom rule package I mentioned in part 1. While I installed markdownlint
on my macOS machine with brew
, it seems fine here to install it with npm
, along with the rule too:
- run: |
npm install \
--no-package-lock \
--no-save \
markdownlint-cli markdownlint-rule-titlecase
Using the --no-package-lock
and --no-save
options makes for a slightly cleaner environment, given what we're doing here (i.e. we are only really interested in NPM metadata for this current job's execution).
Now everything is ready, we can run the linter. I am invoking the markdownlint
tool, just installed with npm
, using the npx
package runner as it seems the cleanest way to do it:
- run: |
npx markdownlint \
--config .qmacro/workflows/markdownlint/config.yaml \
--rules markdownlint-rule-titlecase \
docs/
Without configuration, markdownlint
will apply all the rules by default. I don't want that to happen, so I've used the --config
option to point to a rules file .qmacro/workflows/markdownlint/config.yaml
. This is what's in that file:
# All rules are inactive by default.
default: false
# These specific rules are active.
# See https://github.com/DavidAnson/markdownlint#rules--aliases for details.
heading-increment: true
no-reversed-links: true
no-missing-space-atx: true
no-multiple-space-atx: true
blanks-around-headings: true
blanks-around-lists: true
no-alt-text: true
In other words, with this configuration, only those rules in that second stanza will be applied. Plus of course the explicit NPM package based title-case rule I've specified with the --rules
option.
I've been thinking about where to store workflow related artifacts in a repository. I don't want to use `.github/workflows` for anything other than actual workflow definition files. So right now, I'm thinking along the lines of a hidden user/organisation based directory name -- `.qmacro` in this example -- to parallel `.github`.
The final thing to note in this invocation is that I'm passing a specific directory to be linted: docs/
. This means only content there will be linted. I will probably want some sort of .markdownlintignore
file at some stage, but for now this will do.
Here's the workflow definition in its entirety:
name: Markdown checks
on:
pull_request:
branches: [main, master]
jobs:
main:
runs-on: ubuntu-20.04
steps:
- uses: actions/checkout@v2
- run: "echo ::add-matcher::.qmacro/workflows/markdownlint/problem-matcher.json"
- run: |
npm install \
--no-package-lock \
--no-save \
markdownlint-cli markdownlint-rule-titlecase
- run: |
npx markdownlint \
--config .qmacro/workflows/markdownlint/config.yaml \
--rules markdownlint-rule-titlecase \
docs/
Everything works nicely, and I'm happy with the local and remote linting process.
Thanks to some great direction and enlightenment from my colleague Tobias, I found myself getting my brain around Markdown linting. Of course, not what it is, but what the current possibilities are and how they might apply to my situation. I thought I'd write some notes on what I found (mostly for my future self).
(See also Notes on Markdown linting - part 2 where I learn how to get Markdown linting working in GitHub Actions).
The Node.js-based DavidAnson/markdownlint is the linter of choice. I'll refer to it as markdownlint
in this post.
There's a related project markdownlint/markdownlint which seems to be another, very similar linter written in Ruby. I'll refer to this as mdl
as that's what the executable is called.
They both seem to share the same rule definitions which is good; although mdl
seems to have rules that have been deprecated in markdownlint
.
I went for `markdownlint` for a number of reasons:

- I could install it with `brew` (see later)
- installing `mdl` involved RubyGems, which I've never got on with
- there are references to mechanisms where you can use `markdownlint` in various editors (including in Vim)
- custom rules, which `markdownlint` supports

Markdownlint can be installed via `npm install`
or via brew
. The brew
option is actually via a connected project igorshubovych/markdownlint-cli. I ran the brew install markdownlint-cli
command and was up and running pretty much immediately:
# ~/Projects/gh/github.com/qmacro/qmacro.github.io (markdownlint-post *=)
; markdownlint
Usage: markdownlint [options] <files|directories|globs>
MarkdownLint Command Line Interface
Options:
-V, --version output the version number
-c, --config [configFile] configuration file (JSON, JSONC, JS, or YAML)
-d, --dot include files/folders with a dot (for example `.github`)
-f, --fix fix basic errors (does not work with STDIN)
-i, --ignore [file|directory|glob] file(s) to ignore/exclude (default: [])
-o, --output [outputFile] write issues to file (no console)
-p, --ignore-path [file] path to file with ignore pattern(s)
-r, --rules [file|directory|glob|package] custom rule files (default: [])
-s, --stdin read from STDIN (does not work with files)
-h, --help display help for command
From the options we can see that it works in the way we'd expect - point it at one or more files, optionally give it some configuration, and go.
But we can also see that it allows the use of custom rules. The custom rule that Tobias wanted to use was one that checks for title case (and I still went ahead, despite the fact that I dislike title case intensely :-)). The custom rules can be supplied in different forms as we can see from what can be specified with the --rules
option; this particular one was of the exotic variety, i.e. an NPM package: markdownlint-rule-titlecase. In fact, there's a grouping of NPM packages that are custom rules for markdownlint
, organised via the markdownlint-rule keyword.
As I mentioned earlier, there is a list of references to mechanisms where you can use markdownlint
from the comfort of your editor. This list pointed to fannheyward/coc-markdownlint for Vim.
I don't use Conqueror of Completion (coc) - but I do use the Asynchronous Linting Engine (ALE), which has built-in support for markdownlint
. Within 5 minutes and a few tweaks to my ALE related Vim configuration I was up and running. I have to tweak the rule configuration to my liking, as right now, even as I write this post, I'm being given grief by markdownlint
about overly long lines.
Configuration for markdownlint
can be supplied with the --config
option, or by configuration files in the right place - either in the current directory or in one's home directory.
I added the following to ~/.markdownlintrc
, and the grief about line length went away:
{
"line-length": false
}
I then wanted to see if I could get the custom linting rule working, at least in a basic way. On the NPM page for markdownlint-rule-titlecase it says:
Once installed using npm install markdownlint-rule-titlecase, run markdownlint with --rules "markdownlint-rule-titlecase".
Sounds fair, although a little worrying for me as I'm not going to be working with Markdown content in the context of a Node.js project any time soon. However, it turns out that I can still install the package and use it, even in a non-Node.js project directory:
# ~/Projects/gh/github.com/qmacro/qmacro.github.io (markdownlint-post *=)
; npm i --no-package-lock markdownlint-rule-titlecase
npm WARN saveError ENOENT: no such file or directory, open '/Users/dj/Projects/gh/github.com/qmacro/qmacro.github.io/package.json'
npm WARN enoent ENOENT: no such file or directory, open '/Users/dj/Projects/gh/github.com/qmacro/qmacro.github.io/package.json'
npm WARN qmacro.github.io No description
npm WARN qmacro.github.io No repository field.
npm WARN qmacro.github.io No README data
npm WARN qmacro.github.io No license field.
+ markdownlint-rule-titlecase@0.1.0
added 4 packages from 4 contributors and audited 4 packages in 0.838s
found 0 vulnerabilities
The warnings are fair - there isn't a package.json
file of course, why would there be?
I do now have a smallish node_modules/
directory, though - containing the custom rule package:
# ~/Projects/gh/github.com/qmacro/qmacro.github.io (markdownlint-post *%=)
; tree -d node_modules/
node_modules/
├── markdownlint-rule-helpers
├── markdownlint-rule-titlecase
├── title-case
│   ├── dist
│   └── dist.es2015
└── tslib
    └── modules
7 directories
Oh well, I guess I could delete it when I'm done. In the meantime, can I take this new custom rule for a spin?
# ~/Projects/gh/github.com/qmacro/qmacro.github.io (markdownlint-post *%=)
; markdownlint --rules markdownlint-rule-titlecase _posts/2021-05-13-notes-on-markdown-linting.markdown
_posts/2021-05-13-notes-on-markdown-linting.markdown:11:1 titlecase-rule Titlecase rule [Title Case: 'Expected ## Which Linter?, found ## Which linter?']
_posts/2021-05-13-notes-on-markdown-linting.markdown:27:1 titlecase-rule Titlecase rule [Title Case: 'Expected ## Installing Markdownlint, found ## Installing markdownlint']
_posts/2021-05-13-notes-on-markdown-linting.markdown:55:1 titlecase-rule Titlecase rule [Title Case: 'Expected ## Using Markdownlint with Vim, found ## Using markdownlint with Vim']
_posts/2021-05-13-notes-on-markdown-linting.markdown:63:1 titlecase-rule Titlecase rule [Title Case: 'Expected ## Configuring Markdownlint, found ## Configuring markdownlint']
_posts/2021-05-13-notes-on-markdown-linting.markdown:75:1 titlecase-rule Titlecase rule [Title Case: 'Expected ## Trying a Custom Rule, found ## Trying a custom rule']
Yes! Works nicely. Although like I say, I'm not sure why anyone would want to use such a rule ... I may write one that complains if you do use title case. But I digress.
I think I'd like to be able to run these custom rules in Vim too, but I'll leave that for another time. I'm satisfied at least at this stage to be able to lint my Markdown files at all. And the next thing is actually to be able to use markdownlint
in a GitHub Actions workflow.
Update: I've written that up in part 2.
I've noticed over the years that occasionally the rendered version of my markdown content, in particular on GitHub (which is where most of my markdown content ends up), sometimes contains unrendered headings. Here's an example:
The second level 2 heading "Another heading level 2" remains unrendered, even though everything looks fine. Why? This has bugged me for a while, but not so much as to make me stop and work out why it was happening. When it happened, I'd just go into the markdown source, rewrite the heading line, and all was fine.
Today I finally stopped to spend a bit of time to look into it. Turns out it's quite simple and obvious now I know what was causing it.
The basic syntax for headings involves one or more hashes (depending on the heading level needed) followed by the heading text. There's a space that should separate the hashes and the heading text. Here's an example:
## Heading level 2
What's causing that heading above not to be rendered properly? Well, it's the space. To you and me there is indeed a space between ##
and Another heading level 2
.
But it's the wrong type of space.
Checking first that it's not something weird going on with the markdown renderer on GitHub, let's try a different rendering, in the terminal, with the excellent glow tool:
Same issue.
So let's dig in a little deeper, and look at the source.
First, let's look at the first level 2 heading, which has been rendered correctly:
# ~/Projects/gh/github.com/qmacro-org/test (main=)
; grep 'Heading level 2' README.md | od -t x1 -c
0000000 23 23 20 48 65 61 64 69 6e 67 20 6c 65 76 65 6c
# # H e a d i n g l e v e l
0000020 20 32 0a
2 \n
0000023
# ~/Projects/gh/github.com/qmacro-org/test (main=)
;
Seems OK, and yes, there's the space, hex value 20
, following the two hashes (hex values 23
).
Now let's look at the second level 2 heading, which has not been correctly rendered:
# ~/Projects/gh/github.com/qmacro-org/test (main=)
; grep 'Another heading level 2' README.md | od -t x1 -c
0000000 23 23 c2 a0 41 6e 6f 74 68 65 72 20 68 65 61 64
# # 302 240 A n o t h e r h e a d
0000020 69 6e 67 20 6c 65 76 65 6c 20 32 0a
i n g l e v e l 2 \n
0000034
# ~/Projects/gh/github.com/qmacro-org/test (main=)
;
What the heck is that following the two hex 23
hash characters?
0000000 23 23 c2 a0
# # 302 240
Turns out it's a non-breaking space character. And its UTF-8 encoding, which is what the markdown file has, is c2 a0
.
So this second level 2 heading cannot be rendered as such, as the markdown cannot be recognised. Makes sense!
But where are these non-breaking spaces coming from? How do they get there?
Well, my daily driver during the working week is a macOS device, where it's notoriously more difficult than it should be to type a #
character. One has to use Option-3
(or Alt-3
) to get it. And it turns out that after holding down Option
to hit 3
a couple of times to introduce the ##
for a level 2 heading, my thumb is sometimes still on the Option
key when I hit space
.
And guess what - Option-space
is how you type a non-breaking space on macOS!
So basically it's me that's been causing this issue - by inadvertently inserting not a space but a non-breaking space after the #
characters introducing markdown headings.
I don't know about you, but I find value in staring at other people's shell activities, so I thought I'd share what occurred to me as I did so on this occasion, in case it helps newcomers become a little more acquainted with the shell.
A colleague wanted to find out something about the pull request ID when a workflow was triggered. This is a shortened version of what was shared:
- name: PR ID
run: |
IFS='/' read -r OWNER REPOSITORY <<< "$GITHUB_REPOSITORY"
HEADREFNAME=$(echo ${{ github.event.ref }} | awk -F'/' '{print $NF}')
PR_ID=$(curl -s -H "Authorization: Bearer ${{ secrets.GITHUB_TOKEN }}" \
-X POST \
-d "{\"query\": ... }" \
"$GITHUB_GRAPHQL_URL" \
| jq '.data.repository.pullRequests.nodes[].number' \
)
shell: bash
I've omitted the detail of the API call being made with curl
, partly because it's not relevant, and partly because it's a GraphQL call and extremely ugly.
So what can we learn from this? Let's take it line by line.
IFS='/' read -r OWNER REPOSITORY <<< "$GITHUB_REPOSITORY"
This is a nice way of splitting the value in a variable into a couple of variables. What's in $GITHUB_REPOSITORY
? The Default environment variables documentation tells us that it's going to be the repository owner and name, joined with a /
character, e.g. octocat/Hello-World
.
Let's pick this line apart.
The first thing we see is IFS='/'
. IFS
is an environment variable in Bash and stands for Input Field Separators (or Internal Field Separators). Notice that "separators" is plural. Note also that some folks like to think of them as delimiters, rather than separators, but that's a debate for another time. The default value for the IFS
environment variable is the list of different whitespace types, i.e. space, tab and newline.
Here, we only want to split on /
characters, rather than on any whitespace characters.
There are a number of places that IFS
is used in the context of the shell. One of these places is with the read
command, and in particular, it comes into play when there are multiple variable names specified. But we'll get to that shortly.
The other thing to note is that the setting of the value for IFS
is done "in the same breath" as the read
command, on the same line. This means that the value assigned is temporary, just for the duration of the command or builtin that follows. What actually happens is that the IFS='/'
assignment is passed as part of the environment within which the command or builtin is executed. (I found this explanation on StackOverflow very helpful in understanding this).
This means, in turn, that when (in this case) read
consults the value of IFS
it gets the /
character, and not whatever IFS
was set or defaulted to before that incantation. But once the processing of whatever is on that line is finished, that temporary, execution-environment-specific assignment is done with, and effectively we're back with whatever IFS
was before we started.
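Here's a small sketch that demonstrates both the split and the temporary nature of the assignment:

```shell
# IFS is set to '/' only for the duration of the read builtin.
IFS='/' read -r OWNER REPOSITORY <<< "octocat/Hello-World"

echo "owner: $OWNER"        # octocat
echo "repo:  $REPOSITORY"   # Hello-World

# Afterwards, IFS still has its default value: space, tab, newline.
if [ "$IFS" = $' \t\n' ]; then echo "IFS unchanged"; fi
```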
Next we have the actual execution of the read
builtin: read -r OWNER REPOSITORY
.
In case you're wondering, "builtin" just means that
read
is part of the Bash shell itself, rather than a separate executable. One implication of this is that the execution ofread
is going to be faster (although unless you're running it many times in a loop, or on a very slow machine, the difference is going to be almost imperceptible). Another implication is that you'll want to useread --help
to find out whatread
does, rather thanman read
.
Looking at what read --help
tells us, we see that it reads a line from STDIN and splits it into fields. Note the phrase "a line" - it only reads one line, so if you have multiple lines, you'll need to execute read
in a loop (a common idiom is to use a while
loop here). Next, then, is the -r
option, which prevents any backslashes from escaping characters. Often with input you'll find control characters, such as tab or newline, written in an escaped form, i.e. \t
and \n
respectively. In this instance, this is not desired - any actual backslash should be interpreted directly as such.
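The effect of `-r` can be seen with a quick sketch:

```shell
# With -r, the backslash in the input survives intact.
read -r with_r <<< 'a\tb'
echo "$with_r"     # a\tb

# Without -r, read treats the backslash as an escape character
# and removes it from the result.
read without_r <<< 'a\tb'
echo "$without_r"  # atb
```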
Knowing that the value in $GITHUB_REPOSITORY
is going to be an owner and a repository name, stuck together with a /
character (such as "octocat/Hello-World") we can understand what the variable names OWNER
and REPOSITORY
are likely to receive, given the temporary assignment of /
to IFS
.
But we know read
reads lines from STDIN. So how do we get it to read the value of a variable ($GITHUB_REPOSITORY
) instead? We get it to do that using a "here string" - and that's the last bit of the line that we should now stare at for a second, the <<< "$GITHUB_REPOSITORY"
part.
To understand what a "here string" is, let's take a few steps back, starting at the concept of STDIN ("standard input"). In the context of the shell, this is often what is supplied to a program in a pipeline, like this:
$ producer | consumer
Whatever producer
emits to STDOUT, that's what consumer
receives on STDIN.
There are other ways to supply data to consumer
. One way is to use "redirection", which is useful if you want to use files:
$ producer > some-file
$ consumer < some-file
The first line uses "output redirection", i.e. the output that producer
emits to STDOUT is redirected to some-file
. The second line uses "input redirection", where some-file
is opened for reading on consumer
's STDIN.
There's another type of redirection, called a "here document", which allows us to specify input lines directly, i.e. "here", like this:
$ consumer <<EOF
first line of input
second line of input
last line of input
EOF
The three lines of input are what are supplied to consumer
's STDIN. The string EOF
is declared as a delimiter, and all lines up until that delimiter is seen are taken as input.
And there's a variation on such "here document", and that's a "here string", which is what we have in our example. While regular STDIN redirection is introduced with a single <
, and a "here document"-based redirection is introduced with a double <<
, a "here string" is introduced with a triple <<<
, and takes whatever is supplied, appends a single newline and passes that to STDIN.
In this case, a variable $GITHUB_REPOSITORY
is supplied, so that is expanded to the value it contains, and passed to read
's STDIN.
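One detail worth noting is that appended newline; a quick check with `wc` shows it:

```shell
# A here string supplies the string plus a single appended newline,
# so the three characters of "abc" arrive on STDIN as four bytes.
# (tr strips the leading padding that some wc implementations emit)
bytes=$(wc -c <<< "abc" | tr -d ' ')
echo "$bytes"   # 4
```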
The second line is also interesting and deserves a little attention. It's a single assignment statement, assigning a value to the variable HEADREFNAME
. It doesn't matter too much what this is, but it doesn't hurt to make a guess. Based on the context in which this will run, i.e. in a pull request event, and the reference to the GitHub event property "ref" (in github.event.ref
), we can see from the Webhook events and payloads section of the documentation that this is likely to be something that looks like this:
refs/head/main
Let's stare at this line to see what it does and how it works:
HEADREFNAME=$(echo ${{ github.event.ref }} | awk -F'/' '{print $NF}')
We can see that what is assigned to the HEADREFNAME
variable is something inside this construct: $(...)
. This is the command substitution construct. This allows the output of a command to be substituted in-place. In other words, whatever the output of what's expressed within the $(...)
construct is substituted, and (in this case) assigned as the value to HEADREFNAME
.
You may see an alternative command substitution construct in this form:
`...`
; this is the older style of the construct, but the newer$(...)
style is preferred due to some quoting complexities with the older style.
So what is the command that is producing the output that will be substituted and assigned to the HEADREFNAME
variable here? Let's have a look:
echo ${{ github.event.ref }} | awk -F'/' '{print $NF}'
Remember that the definition context here is a GitHub Actions workflow definition. This is where the ${{ ... }}
comes from - it's not a shell expression; rather, it's an expression in the workflow definition format. It basically means that the value of the property github.event.ref
is substituted; this is before the line is executed by Bash.
Assuming for now that the value of github.event.ref
is indeed refs/head/main
, this amounts to:
echo refs/head/main | awk -F'/' '{print $NF}'
So the value is piped into the STDIN of awk
, the venerable and still useful tool for text processing, data extraction and reporting. And it is here that data extraction is taking place. Let's break down how it works.
The structure of an awk
script is one or more "condition action" pairs. The basic idea is that awk
processes lines that it receives via STDIN, and for each line, applies the condition, and if the condition is true, executes the corresponding action. Conditions are often regular expressions, and there's the special (and common) case of "no condition", in which case the action is executed regardless. (There are also the special BEGIN
and END
conditions which can be used for pre- and post-processing respectively).
Actions are enclosed in curly braces { ... }
.
For quick one-liners, awk
scripts are often expressed "in-line" like we see here. In other more complex cases they're stored in separate script files - you can see a couple of examples of .awk
file contents in the graphing directory within the SAP samples repository cloud-messaging-handsonsapdev.
This particular one-liner looks like this:
(no condition) { print $NF }
In other words, the action will be executed for every line coming in on STDIN. Considering that there's only going to be one line coming in (the refs/head/main
string), that's just a single instance of that action. But what does it do? To understand that, we have to look at $NF
and, in turn, the value '/'
passed to the -F
option in the awk
invocation.
There are a number of built-in variables in awk
, and NF
is one that represents the number of fields.
What does "number of fields" mean, exactly? Well, first, it's the number of fields in the input line currently being processed. And the number of fields is determined by the value of the FS
built-in variable - the "field separator". The default value of FS
is whitespace, but this can be changed using the -F
option, which is what's happening here.
With that knowledge, we can guess what this might produce (note the addition of FS
and the deliberate omission for now of the $
prefix to NF
):
echo refs/head/main | awk -F'/' '{print FS, NF}'
It produces this, i.e. the value of the field separator and the number of fields.
/ 3
Fields in an awk
script can be referred to positionally with $1
, $2
, $3
and so on. But usefully, with a touch of indirection, we can prefix NF
with $
to refer to fields relatively, such that $NF
, which resolves to $3
, which resolves to $3 here, is the last field in this input, $(NF-1) is the second to last, and so on.
So the action { print $NF }
just prints the last field on the line.
In other words, what this entire line does is assign whatever the last part of the value of github.event.ref
(i.e. main
, here) to the HEADREFNAME
variable.
And that's it. While there's more in this workflow definition step, I'll stop here to let you take things in. Hopefully if you're taking some tentative steps towards embracing more terminal based command of your working environment, this has helped break down the barriers a little to the syntax and use of Bash shell expressions and scripts.
]]>With a programming or definition language, especially one that's new and powerful, it takes me a while to become comfortable writing scripts or definitions from scratch. I have a small amount of auto completion in my editor, but I'm not a fan - I prefer to learn by looking things up and then typing them in, rather than have words automatically completed for me.
The YAML based syntax for defining GitHub Actions workflows is powerful and there are different ways of achieving similar things. And it's new to me too (although defining jobs and steps isn't - in many ways it's just like writing Job Control Language (JCL) back in the mainframe era, but that's a story for another time).
While the latest version of the GitHub command line client gh sports lovely new features for workflows and actions, it doesn't quite give me the quick cross-repository overview that I'm looking for. So I decided to combine three of my favourite terminal power tools to help me:
- gh, which is already a very accomplished command line interface (CLI) to GitHub and a really comfortable way of using the API
- fzf, which is a powerful fuzzy finder utility and provides just enough features for me to build simple terminal user interfaces (UIs) with
I combined them to build a "workflow browser". Here it is in action:
It consists of three parts:
- a new environment variable GH_CACHETIME which I can set globally to be nice to the GitHub API servers (I'm not changing workflows that often so a generous cache time of 1 hour works for me)
- the main workflowbrowser script which finds workflow definitions across my content on GitHub and presents them in a list to search through
- a separate showgithubcontent script that displays the content of a resource in one of my GitHub repositories
The showgithubcontent script was initially a function inside of the workflowbrowser script, but I separated it out, first because it felt better and second because there was something more I could do once I'd browsed the workflow definitions with workflowbrowser and selected one - more on that later.
Here's the script in its entirety, as it stands right now:
#!/usr/bin/env bash
# Find and browse GitHub Actions workflow definitions.
# In addition to regular shell tools (such as sed), this
# script uses gh and fzf.
workflows() {
# Takes owner type (org or user) and owner name.
# Returns tab-separated list of owner/repo/workflowfile/path.
local ownertype=$1
local owner=$2
gh api \
--method GET \
--paginate \
--cache "${GH_CACHETIME:-1h}" \
--field "q=$ownertype:$owner path:.github/workflows/" \
--jq '.items[] | ["\(.repository.full_name)/\(.name)", .repository.owner.login, .repository.name, .path] | @tsv' \
"/search/code"
}
main() {
# Calls workflows for my org and user.
cat \
<(workflows org qmacro-org) \
<(workflows user qmacro) \
| fzf \
--with-nth=1 \
--delimiter='\t' \
--preview='showgithubcontent {2} {3} {4} yaml always' \
| cut -f 2,3,4
}
main "$@"
There's just a main function and a workflows function.
The main function calls the workflows function a couple of times, because I have repositories under my own user qmacro and also under a small experimental organisation qmacro-org, and I have workflows across both of these owner areas.
In learning more about Bash I've found it's helpful to know the terms for various aspects, so I'm going to point out one here. I'm calling the workflows function twice, like this:
cat \
<(workflows org qmacro-org) \
<(workflows user qmacro)
This to me was the simplest way of combining output from two calls into a single stream, using cat. The <(...) is process substitution. This is very useful when you want to supply some data to a command which is expecting that data to be in a file, but where you don't have a file, and instead want to generate the data on the fly and have it provided as the output of some execution. Here I'm using process substitution to call the workflows function a couple of times, and have the output from those calls supplied to cat, as if I did this: cat firstfile secondfile.
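A minimal, standalone illustration of the pattern (nothing here is specific to the workflow scripts):

```shell
# Each <(...) runs a command and presents its output as a file-like
# argument, so cat sees "two files" even though no files exist
cat <(echo "first") <(echo "second")
# first
# second
```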
I did a 10 minute video on process substitution on Hands-on SAP Dev, in case you're interested: Ep.39 - Looking at process substitution.
I'll dig into the workflows function shortly, but for now, we need to know what it outputs, to understand better what we do with that output, i.e. what we do downstream from cat in the pipeline.
The output from workflows is a set of records representing workflow definitions, in the form of lines with tab-separated fields, like this:
qmacro-org/test/dump.yml qmacro-org test .github/workflows/dump.yml
qmacro/showntell/main.yml qmacro showntell .github/workflows/main.yml
qmacro/qmacro/build.yml qmacro qmacro .github/workflows/build.yml
In order, the fields represent:
- the owner/repo/workflowfile combination (used for display)
- the repository owner
- the repository name
- the workflow file path
The lines are piped into fzf which is used to present the workflow definitions and also a preview of their contents. This is done by using various options supplied to fzf.
The first option deals with what to show in the basic list display that fzf first presents, and that is the contents of the first field above (the combination). This is done using the --with-nth option; we also tell fzf how the fields are delimited:
- --with-nth=1 - use field 1 in the list display
- --delimiter='\t' - fields are tab-delimited
Then there's what to do from a preview perspective; when a particular entry in the list is selected, fzf can run a preview command to display something in a window:
can run a preview command to display something in a window:
--preview='showgithubcontent {2} {3} {4} yaml always'
Whatever is produced (via STDOUT) by the incantation supplied with the --preview option is shown in the preview window. Here, we call the showgithubcontent script, supplying that script with 5 arguments. The first three use fzf's field reference syntax to pass the values of the second, third & fourth field, i.e. the repo owner, the repo name and the workflow file path. The last two arguments control how showgithubcontent displays things (we'll come to that later).
With fzf, if an item in the list is indeed selected, then the line passed into fzf that represents the selection is output to STDOUT. This makes fzf a very powerful tool that plays well with other tools, following the Unix philosophy (if no selection is made, e.g. by aborting fzf with Ctrl-C, then nothing is emitted).
The final part of the main function takes the line emitted from fzf and outputs the same three fields (repo owner, repo name and workflow file path). Basically field 1 is just used as a "display" field for fzf.
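To make that concrete, here's the cut stage in isolation, fed with one of the sample lines (printf is used here just to get real tab characters into the test data):

```shell
# cut's default delimiter is the tab character, so -f 2,3,4
# drops the display field and keeps owner, repo and path
printf 'qmacro/showntell/main.yml\tqmacro\tshowntell\t.github/workflows/main.yml\n' \
  | cut -f 2,3,4
```

Field 1 (the display value) is dropped; the three fields that showgithubcontent needs remain.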
The workflows function is basically a wrapper around a call to the GitHub Search API. This is an API that I haven't used before now, and it's pretty powerful. There are different endpoints representing different search approaches. What worked for me, to find workflow definitions, was to use the Search code endpoint with /search/code.
This endpoint takes the search criteria in the form of a query string parameter q, and it was very easy to use the GUI based search to try out different search parameters to figure out what I needed to specify. Here's an example:
https://github.com/search?q=org%3Aqmacro-org+path%3A.github%2Fworkflows%2F
One thing that tripped me up at first was that the wrong type of request was being made. I supplied the search criteria value in the q query string parameter correctly, like this (as you can see in the function):
--field "q=$ownertype:$owner path:.github/workflows/"
but the HTTP call that gh then made for me was a POST request, with this search query parameter in the body of the request. That wasn't right. Checking in the API documentation, the q parameter needs to be in the query string. Explicitly setting the method to GET made this right:
--method GET
There are a couple of other "housekeeping" parameters used here too:
--paginate
--cache "${GH_CACHETIME:-1h}"
I don't yet have that many workflow definitions, but if it comes to that, gh will work through the responses to get them all for me with --paginate.
And the --cache parameter works both ways: my activities are well behaved when it comes to using the API endpoints, and also, after the first time the list of workflow definitions is retrieved, any subsequent uses of the workflow browser are that much snappier (this works also with the similar use of the --cache parameter in the showgithubcontent script we'll see shortly). Note that if there's no value specified for GH_CACHETIME, the default will be 1 hour (1h) through the use of shell parameter expansion.
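That ${GH_CACHETIME:-1h} expansion is easy to see in isolation:

```shell
unset GH_CACHETIME
echo "${GH_CACHETIME:-1h}"   # no value set, so the default is used: 1h

GH_CACHETIME=30m
echo "${GH_CACHETIME:-1h}"   # the value that's set wins: 30m
```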
Next we come to the use of the --field parameter, which allows me to specify the name and value for the search parameter q. I looked at the Searching code documentation to find out about the ownertype:owner specification. The first time around this value will be "org:qmacro-org" and the second time around it will be "user:qmacro". Moreover, with path I can search for content that appears at a specific location - see Search by file location.
For those wondering, GitHub Actions workflow definition files are stored in the .github/workflows/ directory within a repository.
Last but not least I use the --jq parameter to supply gh with a jq script that will parse and extract the data I need from the API's JSON output. I think it was in release 1.7.0 that this feature appeared, and it's a great idea - build jq into gh so those that don't have jq already installed can still benefit. I guess it also helps to establish jq as the de facto standard for parsing and manipulating JSON.
If we add some whitespace to the jq
script passed with the --jq
parameter, we get this:
.items[]
| [
    "\(.repository.full_name)/\(.name)",
    .repository.owner.login,
    .repository.name,
    .path
  ]
| @tsv
I think it's always easier to stare at a script like this when we see what it's going to be processing, so here's some sample output from the API call to the search endpoint (reduced for brevity):
{
  "total_count": 7,
  "incomplete_results": false,
  "items": [
    {
      "name": "dump.yml",
      "path": ".github/workflows/dump.yml",
      "repository": {
        "id": 331995789,
        "name": "test",
        "full_name": "qmacro-org/test",
        "owner": {
          "login": "qmacro-org",
          "id": 75827316,
          "type": "Organization"
        }
      }
    },
    {
      "name": "main.yml",
      "path": ".github/workflows/main.yml",
      "repository": {
        "id": 331995789,
        "name": "showntell",
        "full_name": "qmacro/showntell",
        "owner": {
          "login": "qmacro",
          "id": 73068,
          "type": "User"
        }
      }
    },
    {
      "name": "build.yml",
      "path": ".github/workflows/build.yml",
      "repository": {
        "id": 165207450,
        "name": "qmacro",
        "full_name": "qmacro/qmacro",
        "owner": {
          "login": "qmacro",
          "id": 73068,
          "type": "User"
        }
      }
    }
  ]
}
Now we can understand what the jq script is doing. It's working through the contents of the items array (the search results) and piping each item into an array construction. The array construction declares four fields - a literal string and three properties:
- the literal string contains two JSON properties, .repository.full_name and .name, referenced with the \(...) syntax; they're put into a literal string so I can add a slash (/) between them
- the three properties are the repository owner name, the repository name and the workflow file path
Once constructed, the array is passed to @tsv which puts the values into a nice tab-separated list.
I think it's fair to say that the output from the built-in jq works as if the --raw-output flag has been specified (see the jq Manual), which is what we want.
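If you have jq installed, the same construction can be tried standalone; here's a reduced, made-up item with the same shape as the API output, piped through the identical filter (-r mirrors the raw output behaviour of gh's built-in jq):

```shell
echo '{"items":[{"name":"dump.yml","path":".github/workflows/dump.yml","repository":{"name":"test","full_name":"qmacro-org/test","owner":{"login":"qmacro-org"}}}]}' \
  | jq -r '.items[] | ["\(.repository.full_name)/\(.name)", .repository.owner.login, .repository.name, .path] | @tsv'
```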
This then produces the lines that we've seen earlier, i.e. ones that look like this:
qmacro-org/test/dump.yml qmacro-org test .github/workflows/dump.yml
qmacro/showntell/main.yml qmacro showntell .github/workflows/main.yml
qmacro/qmacro/build.yml qmacro qmacro .github/workflows/build.yml
These lines are then ready for piping to fzf in main(). Great!
Now let's move on to the second script, which is what fzf calls to present the previews (i.e. with --preview='showgithubcontent {2} {3} {4} yaml always').
As I mentioned earlier, this was originally just another function inside the workflowbrowser script, but I extracted it to use outside of that script too. You'll see why in a bit.
Here's the script in its entirety:
#!/usr/bin/env bash
# Takes owner, repo and path and shows content of that resource from GitHub.
# Also accepts optional language and colour parameter.
# Uses gh, base64 and bat.
declare owner=$1
declare repo=$2
declare path=$3
declare language="${4:-txt}"
declare color="${5:-never}"
gh api \
--cache "${GH_CACHETIME:-1h}" \
--jq '.content' \
"/repos/$owner/$repo/contents/$path" \
| base64 --decode -i - \
| bat --color "$color" --theme gruvbox --plain --language "$language" -
This is so simple as to not even warrant the main() function based approach to organisation. At least not yet. So what does it do? It expects five parameters, which we've seen already:
- owner - the repository owner
- repo - the repository name
- path - the path of the resource within the repository
- language - optional, defaulting to txt
- color - optional, defaulting to never
The last two parameters are specific to the bat tool, which is a posh version of cat - bat's home page calls it "a cat clone with wings".
The reason we need the first three parameters is because they're required in the call we need to make to the GitHub Contents API. With this endpoint:
/repos/{owner}/{repo}/contents/{path}
we can retrieve the contents of a resource (a file) in a repository.
Let's have a look what this gives us, in a sample call, for the following values:
$ gh api /repos/qmacro/showntell/contents/.github/workflows/main.yml
{
"name": "main.yml",
"path": ".github/workflows/main.yml",
"size": 387,
"url": "https://api.github.com/repos/qmacro/showntell/contents/.github/workflows/main.yml?ref=master",
"type": "file",
"content": "bmFtZTogYWRkX2FjdGl2aXR5X2NhcmQKCm9uOgogIGlzc3VlczoKICAgIHR5\ncGVzOiBvcGVuZWQKCmpvYnM6CiAgbGlzdF9wcm9qZWN0czoKICAgIHJ1bnMt\nb246IHVidW50dS1sYXRlc3QKICAgIG5hbWU6IEFzc2lnbiBuZXcgaXNzdWUg\ndG8gcHJvamVjdAogICAgc3RlcHM6CiAgICAtIG5hbWU6IENyZWF0ZSBuZXcg\ncHJvamVjdCBjYXJkIHdpdGggaXNzdWUKICAgICAgaWQ6IGxpc3QKICAgICAg\ndXNlczogcW1hY3JvL2FjdGlvbi1hZGQtaXNzdWUtdG8tcHJvamVjdC1jb2x1\nbW5AcmVsZWFzZXMvdjEKICAgICAgd2l0aDoKICAgICAgICB0b2tlbjogJHt7\nIHNlY3JldHMuR0lUSFVCX1RPS0VOIH19CiAgICAgICAgcHJvamVjdDogJ3Ax\nJwogICAgICAgIGNvbHVtbjogJ3RoaW5ncycK\n",
"encoding": "base64"
}
(Output is reduced for brevity again).
The content isn't what we might first expect - where's the YAML? It's Base64 encoded, so we need to grab the value of the content property (which we do with --jq '.content') and decode it. The handy base64 command is ideal for that.
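The decode step is easy to picture with a round trip (a sketch - the real encoded content of course comes from the API response):

```shell
# Encode a snippet, then decode it again, just as the
# script decodes the API's .content value
encoded=$(printf 'name: build' | base64)
printf '%s' "$encoded" | base64 --decode   # prints: name: build
```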
Once decoded, the workflow definition YAML content is piped into bat, with the following parameters:
- --color "$color" - do we want colour? In preview mode, always (which is why we pass always in the call from the other script), but unless we're explicit about that, bat won't use colour. This is because of the shell parameter expansion in the declaration of the color variable: "${5:-never}", where the literal string "never" is used as a default value if none is supplied.
- --theme gruvbox - who doesn't like a little gruvbox theming?
- --plain - this turns off any of the bat "chrome" like line numbers and headings.
- --language "$language" - this tells bat about the content, in the form of a hint as to what language it is and therefore how to syntax highlight it.
And don't miss the final - passed to bat; that's to tell it to read from STDIN.
Embracing the Unix philosophy
That's about it for the two scripts. I've found them to be useful and have had fun creating them. Really it's just gluing together different tools - that's sort of the point, part of the Unix philosophy in general.
And talking of that, here's the reason I split out the showgithubcontent function into a separate script. It's because I wanted to be able to browse the workflow definitions, but then if I selected one, I wanted to be taken into an editor with that definition's contents. And with a proper shell (like Bash, or most other Unix shells) this is simple:
$ workflowbrowser | xargs showgithubcontent | vim --not-a-term -
That is:
- run workflowbrowser and take the selected output from it (which will be the three values that fzf emits when I select a workflow definition) and, by piping them through to a call to xargs, send them as parameters to showgithubcontent
- run showgithubcontent - it's been used in fzf's preview window, but now we're calling it explicitly, for the selected definition, without the two extra arguments "yaml" and "always", so that the workflow definition is output without adornment
- send the output into vim, my editor, where I tell it to read from STDIN (that's the use of -) and, using --not-a-term, tell it that its startup context is not a terminal (it's a pipe) so that it won't issue any warnings along those lines
Here's an example of that pipeline flow in action:
I hope you found this useful and perhaps it will encourage you to create your own utility scripts using gh and fzf.
I used jq to produce JSON for the first time, while writing a script to enhance my Thinking Aloud journal entry titles.
In my Thinking Aloud journal, the entries are issues in a GitHub repository. To reduce friction I decided to just use the current date and time for the journal entry title.
That's worked fine, but in the overview of the issues it wasn't really practical to pick out the one I wanted to read or edit, because all I had to go on was the timestamp. Of course, I could scan the recent entries but that would quickly become a little limiting as the number of journal entries grows.
In a small script preptweet, used when automatically tweeting about new entries, I was extracting the first 50 characters from the body and using that in the tweet. You can see an example in this tweet - "I've been thinking about field naming conventions today …".
I thought this would be a useful string to have in the journal entry (issue) titles too, so I wrote a script appendtitle that would do that for me for the existing issues. I have yet to decide how to modify the process of creating a new journal entry (I could just have this script run as a separate job step in the workflow I already have, for example).
appendtitle contains essentially a single incantation, deliberately so, in my journey to practise my scripting. It's not the most readable but it helps me think about pipelining and how data flows through such a pipeline.
I thought it might be useful to share and explain, in case others are on a similar journey. In it, I'll show how I used jq to cleanly produce JSON - I normally consume JSON with jq, so this was a nice departure.
1 #!/usr/bin/env bash
2
3 # Convert journal entry issues where the issue title is currently
4 # just the date and time stamp, by adding the first <length> chars
5 # of the issue body to the title.
6
7 readonly length=50
8
9 gh api "/repos/:owner/:repo/issues?state=open&labels=entry" \
10 --jq ".[] | [.number, .title, .body[0:$length]+\"…\"] | @tsv" \
11 | grep -E '\t\d\d\d\d-\d\d-\d\d \d\d:\d\d:\d\d\t' \
12 | sed 's/\t/ /g' \
13 | while read -r number date time text; do
14 newtitle="$date $time $text"
15 jq -n --arg key title --arg value "$newtitle" '{($key):$value}' \
16 | gh api "/repos/:owner/:repo/issues/$number" --input -
17 sleep 0.25
18 done
Here's a breakdown, by line:
9: Invoke the GitHub API with gh to retrieve the open issues representing journal entries.
10: Use gh's --jq flag to pass a script to pull out the issue number, current title & first <length> characters from the body (plus an ellipsis to denote an elision). Output these values in tab-separated format.
So far, here's typical output produced from lines 9 and 10:
19 2021-04-09 13:17:08 I've been thinking about field naming conventions …
18 2021-04-07 16:27:58 One consequence of using repo issues for journal e…
15 2021-04-07 09:04:01 Does it make sense to create a workflow to clean u…
Those are tab characters between the three columns number, timestamp and text.
Continuing on:
11: The output produced is passed via grep to check for a timestamp (nnnn-nn-nn nn:nn:nn) bounded on either side with tab characters (\t). This ensures that only those entries with a title that is (still) only a timestamp are processed. (In constructing the pattern, I found it clearer to write out each of the \d digits than to use something like \d{4} for the four-digit year.)
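As an aside, \d is a Perl-style character class that not every grep build understands in -E (ERE) mode; a more portable formulation of the same check uses [0-9] and a real tab character captured into a variable. A sketch, with a hypothetical one-line sample in the same shape as the script's input:

```shell
# Build the pattern with a literal tab, avoiding \t and \d entirely
tab=$(printf '\t')
printf '19\t2021-04-09 13:17:08\ttext\n' \
  | grep -E "${tab}[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2}${tab}"
```

A title that already has text after the timestamp won't match, because the pattern insists on a tab immediately after the seconds.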
12: Rather than mess around with tabs from this point on, sed is used to convert each tab to a space; this will keep things simple for reading each "field" on the next line.
13: The output is now passed into a while loop, where read is used to capture each field. The default delineation is whitespace, so perhaps you're thinking "what happens to the words beyond the first one, for the text value on each line?". Well, because there are no further variable names following text in the read invocation, text gets the rest of the line, not just the next whitespace-separated token. In other words, taking the first output line as an example, we don't just get "I've" in text, but all of "I've been thinking about field naming conventions …".
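That "last variable takes the rest of the line" behaviour is simple to demonstrate in isolation (a standalone sketch using a here-string):

```shell
line="19 2021-04-09 13:17:08 I've been thinking about field naming conventions"
read -r number date time text <<< "$line"
echo "$number"   # 19
echo "$text"     # I've been thinking about field naming conventions
```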
14: The values are marshalled into a new title format, in newtitle.
15: Using jq, a properly formatted chunk of JSON is produced, to prepare a payload value to pass in the GitHub API call to update an issue.
If the value of newtitle was "2021-04-09 13:17:08 I've been thinking about field naming conventions …", then this jq invocation would produce this (including the whitespace):
{
  "title": "2021-04-09 13:17:08 I've been thinking about field naming conventions …"
}
16: This JSON thus produced can then be supplied in the API call, again using gh, with the value - (classically denoting "take from STDIN") for the --input parameter.
17: A short pause between API calls keeps the GitHub API endpoint sweet, and we're done.
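One nice property of building the payload with jq like this is that awkward characters in the value get escaped for free - for example (compact output via -c, purely for illustration):

```shell
jq -cn --arg key title --arg value 'He said "hi"' '{($key):$value}'
# {"title":"He said \"hi\""}
```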
This is the first time I've used jq to produce JSON, and it feels a lot safer than messing around with quoting, and the quotes required for the JSON format itself. Thanks jq, and of course thanks GitHub API!
I started my computing adventure at the age of 11 on a minicomputer (see Computer Unit 1979) and then IBM mainframes featured heavily in the early and formational part of my career. Job definitions, job & step interdependencies, batch job execution and output management are in my blood.
The reliability, predictability, and perhaps even the ceremony of defining a job, submitting it to the right execution class, have it run, at some time, and then poring over the output after execution was finished, is something that still appeals to me. Even in today's world of always-on, I'd like to think that realtime, the ultimate opposite to batch, is in some senses overrated, or at least misunderstood.
All of my career, more or less, has revolved around SAP systems. The R in SAP R/2, which I worked with between 1987 and around 1995, stood for Realtime (and this was the name of the consulting company I joined to launch my career as a consultant / contractor, but that's a story for another time).
Realtime vs batch
What did realtime mean here? Well, it meant that human facing processes were exposed via screens, interaction with data was live, it happened there and then. Transactions could be executed directly. What SAP R/2 replaced was a completely batch oriented system where everything ran asynchronously and the idea of screens allowing access to and interaction with business processes was very new. Moreover, these business processes were integrated.
Of course, any SAP Basis person will tell you that while yes there are dynamic programs (dynpros) that allow immediate and interactive access in realtime, the batch concept is still alive and well in SAP systems. It was then in R/2 (with an overnight schedule of tens if not hundreds of interdependent jobs), and it even is today with SAP S/4HANA Cloud, and every other SAP system that is based upon the R/3 architecture. Yes, I'm talking about the batch processes, and even the update processes, that are part of the DISP+WORK design from the early 1990s.
So batch is still alive and well, in fact it never went away.
Moreover, while for very large organisations the mainframe lives on, especially in financial circles, the concept of the mainframe lives on too. The Eternal Mainframe is a great essay that muses on that and more.
Realtime vs resilient
And in today's era, the obsession with realtime seems to be spilling over into the API world, where folks are wanting to interconnect their systems in a loosely coupled way with realtime interfaces. While loose coupling is usually the right approach, realtime interfaces are a different beast. In some cases of course, synchronous communication, with blocking, is required. But in many cases it's not.
What the R should really stand for here is not Realtime, but Resilient.
(I'd like to take credit for this quotable nugget, but I have to attribute it to the person from whom I heard it first - my friend and SAP colleague Craig Stasila.)
And what does that mean, exactly? Well to me it means not synchronous, but asynchronous. Message (i.e. event) based integration. Message events that are fired by a system, with a payload, managed by a message bus, and received & processed by other systems. We've looked into this a lot on our Hands-on SAP Dev show, in particular the Diving into SAP Enterprise Messaging series (SAP's Enterprise Messaging service is now called Event Mesh, by the way).
Embracing & understanding the importance of this asynchronous nature might help folks to think about the nature of batch, too. Not everything needs to be immediate. Not everything must happen as soon as something else happens. If that was the case, then why are we seeing such a massive interest and use of GitHub Actions, which brings the whole idea, and appeal, of batch processing to the masses.
GitHub Actions and batch processing
While writing this I've realised that there's another layer to GitHub Actions that adds to the appeal for me. When I first encountered batch processing, at Esso Petroleum at the start of my career, I spent many a happy hour writing Job Control Language (JCL), monitoring jobs, and obsessing over the detail of their output messages. One thing that was almost unspoken in this is that sitting at my silent terminal, I had no idea at the time where the machines were that processed my jobs, what they looked like, sounded like, nor did I have to care. They were looked after by the system operators.
And so it is with GitHub Actions. Unless I'm using self-hosted runners, I have no idea about the machines upon which the jobs defined in my workflows are run. I don't know where they are, whether they're real or virtual, nothing. And as long as I remain within my execution quota, I don't have to care, either. Again, that's someone else's task.
SDSF for GitHub Actions
Anyway, I'm not really sure where I'm going with this post. I'd started out with the intention of explaining a little bit as to why, to GitHub Actions product manager Chris Patterson's question "If you had one wish for GitHub Actions what would it be?", my answer was:
"SDSF for workflow/job execution and output. Please :-)"
IBM's System Display and Search Facility (SDSF) was how I navigated the output from batch jobs that had executed. How I searched, sorted, viewed, printed and purged output. How I found patterns in what was happening in the area for which I was responsible. Using a powerful and classic terminal user interface (TUI) design which fit well with the Interactive System Productivity Facility (ISPF) world where we spent our working hours.
I think I'll leave the explanation for why I think it would suit the GitHub Actions ecosystem, for next time. Until then, I'll leave you with a screenshot (courtesy of Trent Balta and the IBM Community) of SDSF in action.
]]>I finally worked out how to get new windows and panes to open in the current working directory in tmux. Here's what I did.
I've been starting multiple tmux sessions, one for each project I'm working on, and ensuring that I'm in the "right" base directory for each of those projects before actually creating the corresponding tmux session. That way, each new window or pane I open places me in that project's base directory. Which isn't too bad.
But I'm trying to move to a simpler workflow, and use fewer tmux sessions. This meant I hit on that possibly age-old issue of being in the "wrong" directory when I create a new window or pane, and having to cd to where I want to be. Which is usually where I just was before invoking the new window or pane command!
I did a bit of digging and found the answer, which was quite simple in the end - and I would have known about it with a better level of knowledge about tmux. I had written a short essay on that subject: Deeper connections to everyday tools, and in that essay, I'd made a reference to this new discovery.
Christian Drumm asked me today about this very thing, so I thought I'd write this short entry that I could refer to for Christian and also for others.
Basically it involves using the -c option when creating a new window or pane, and passing the value of the built-in variable pane_current_path (see the manual for info).
I initially got this info from a gist by William Heng but this Stack Exchange answer by Chris Johnsen has some great background which is worth reading too.
The change I made to my tmux
configuration is in this commit:
https://github.com/qmacro/dotfiles/pull/2/commits/2664669d5922e640b232f185e2045e412852f47c
and looks like this:
bind c new-window -c "#{pane_current_path}"
bind '-' split-window -c "#{pane_current_path}"
bind '\' split-window -h -c "#{pane_current_path}"
bind C new-window
bind '_' split-window
bind '|' split-window -h
Basically I'm now set up to enjoy the new behaviour (opening new windows and panes in the current working directory) when I use the keys I normally use:
- c - new window
- '-' - new vertically split pane
- '\' - new horizontally split pane
But I've added three extra bindings in case I want the old behaviour, bindings to the "shifted" version of those keys:
- C - new window
- '_' - new vertically split pane
- '|' - new horizontally split pane
I've not found myself using these extra bindings for the old behaviour yet, and I'll probably end up removing them.
Anyway, there you have it. Thank you William and Chris for the help!
]]>Something mildly profound emerged from the combination of two recent activities:
The successful maintenance of that beautifully designed manual lever espresso machine did take a while, but during it I guess I formed a deeper relationship with the device, built upon the existing connection I had already from the constant enjoyment & challenge of getting everything aligned to pull a decent shot.
And the items I sold (SONOS speakers, an old Macbook Pro) are items I've not really had any relationship with at all. Yes, I used the speakers, but not every day, and since SONOS's meltdown last year an active distancing and dislike has grown between me and the devices.
What was profound was that the lack of relationship I had with the stuff I just sold on eBay actually amplified the deep relationship I feel with the La Pavoni.
Tools I use often in the kitchen
I'd been thinking about tools I use often, since noticing how worn my hand milk frother was recently.
I've had that milk frother for about 10 years. I've had a moka pot for about that long too - originally one from Bialetti, which I eventually replaced with one from IKEA (which is surprisingly excellent).
And I've had the La Pavoni Professional Lusso for almost 2 years.
Give or take, I've used each of these items every single day since I've had them. Often more than once per day. (In case you're wondering, I make M's latte with the moka pot and froth the milk manually, as that's how she prefers it, and I make my espresso with the La Pavoni).
These are just examples of course, but they're very visceral because I use all of them with my hands and what they produce is also consumed by me and M.
There's something special about tools like this. The bond, the attachment, the relationship that builds is more something than nothing. Anyway, before I get too philosophical, I'll get to the other half of this post, which is about tools I use at work.
Tools I use often at work
I like the command line. Give me a terminal over a GUI any day. The command line is a rich and powerful environment because of the expressive nature and the closeness you feel to the things you're trying to do (or manipulate).
That power comes from the combination of two things, the shell, and the commands available to you in your path (for more on the shell, see Waiting for jobs, and the concept of the shell).
Without thinking too hard, here's a list of commands, of tools, that I use in the context of the shell, every single day:
- vim (editor)
- tmux (terminal multiplexer)
- curl (HTTP client)
- fzf (fuzzy finder)
- jq (JSON processor)

(One could say that the combination of vim, tmux and the shell is my IDE.)
Of course, I use other commands too, and many Bash shell builtins & features, but I'd say these are tools that I find essential.
More learning required
As well as being daily drivers, regardless of the task at hand, what else do these tools have in common?
Well, to be honest - there's still much that I don't know about them.
In many ways, one could argue that these tools represent the zenith of achievement in their area:

- vim
- tmux is the de facto standard for managing terminal sessions
- curl is possibly the most popular HTTP client mechanism out there, in command line tool form as well as in library form
- of fzf, someone remarked recently, and I tend to agree: "I don't think any other single cli tool has ever had such a big and positive impact on my workflow than fzf has, it's really a great piece of work"
- despite alternatives such as fx, it's jq that everyone turns to, to handle JSON data on the command line

So while at least the La Pavoni machine has moving parts, it's still a block of stone compared to these tools, which all have such rich and varied features.
Here are a few examples of what I've only recently discovered, or perhaps uncovered, with these tools:

- using shfmt to pretty-print my shell scripts on save
- getting tmux to open a new window or pane in the same directory as I was when I invoked the open command
- using --data-urlencode to have values automatically URL encoded with curl
- treating jq as a complete language, writing my first script with function definitions

As those lovely folks that join my live stream sessions* know - I'm not afraid of admitting that "I've no idea what I'm doing".
*I live stream usually weekly on Friday mornings UK time - look for the Hands-on SAP Dev episodes on the SAP Developers YouTube channel.
At the beginning of last year, along with other folks in the SAP Community, I wrote up my learning list for 2020. In it, I had a section titled "Understanding core things better", and while that contained the kernel of the idea that I want to improve my understanding of fundamental things, I think I missed the mark somewhat. I failed to spot the tools that were right in front of me (or my fingers).
So I guess this is a reminder for me that I'm nowhere near done. That's fine, continuous learning is a thing, and as it is for many others, it's my thing.
Triggered by some mundane moments recently (eBay activities, gasket maintenance, the wearing thin of a simple wooden handle), I've come to realise what I need to do. And that is far from mundane. It won't be a short process -- I think mastery of these tools will only come over years -- but the journey will be enjoyable and rich from the outset.
curl, using a two-phase approach.
I'm becoming more familiar with the YouTube API surface area, and a task recently required me to look into an efficient way of uploading videos to a YouTube channel. While I managed the upload technically, it was ultimately in vain due to a recent change to the terms of service. But it's still worth sharing the two-phase approach that I was able to take.
The YouTube Data API has a Videos: insert facility. It's worth reading through this, and, if you get the chance, through other areas of the API, because they're quite similar, and what appeared initially a little overwhelming to me has become more familiar.
The approach revolves around the following flow:

1. Send the video's metadata, in the form of a JSON "video resource", in an initial POST request; the response contains an upload URL in its Location header.
2. Send the video's binary data in a second request, to that upload URL.
Here's a brief example based on a throwaway script I created to use the API.
Preparing the video resource
I don't like writing JSON by hand, I prefer writing YAML and then having it converted to JSON on the fly. Here's a function I wrote to produce the video resource:
videoresource() {
yq e -j -I=0 - <<EODATA
snippet:
categoryId: 28
title: The video title
description: |
A longer description that can run over
several lines if needed. This is the text
that appears beneath the video on YouTube.
tags:
- sap
- bash
- jq
- scripting
- btp
status:
selfDeclaredMadeForKids: False
privacyStatus: Unlisted
recordingDetails:
recordingDate: 2018-03-31T00:00:00Z
EODATA
}
I'm using yq to evaluate (e) the YAML and emit JSON (-j). The -I=0 tells yq to put all the JSON output on a single line (by default it will nicely pretty-print it with whitespace).
Posting the JSON video resource
In making a POST request to the Videos: insert endpoint, you need to specify the part parameter, which, amongst other things, describes what you're sending in the video resource. I prepare my part parameter like this (and yes, I know I should URL encode the values, but hey, it's a throwaway script and it worked):
urlparameters() {
paste -s -d'&' - <<EOPARM
part=snippet,status,recordingDetails
notifySubscribers=False
uploadType=resumable
EOPARM
}
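As a toy illustration (separate from the script itself) of what paste is doing there: -s serializes all input lines into one, and -d'&' makes & the join character:

```shell
# paste -s joins all input lines into a single line; -d'&' sets the
# delimiter used between them; the trailing '-' means "read stdin"
printf 'part=snippet,status,recordingDetails\nnotifySubscribers=False\nuploadType=resumable\n' \
  | paste -s -d'&' -
# prints: part=snippet,status,recordingDetails&notifySubscribers=False&uploadType=resumable
```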
As well as adding the optional parameter notifySubscribers, I also added the uploadType parameter. While not directly documented on the Videos: insert page, it appears in the Ruby code sample there and seems to be quite important.
Using the two functions thus defined, it's a straightforward matter of using the Swiss Army toolchain of HTTP clients, the venerable curl:
curl \
"https://www.googleapis.com/upload/youtube/v3/videos?$(urlparameters)" \
--verbose \
--header "Authorization: Bearer $(tget)" \
--header "Accept: application/json" \
--header "Content-Type: application/json" \
--data "$(videoresource)"
You'll need to supply an OAuth access token as the value for the Bearer token in the Authorization header - I want to focus on the actual two-phase upload here, so I'll leave the tget script that I have for another time.
Here's a slightly redacted snippet of the HTTP request and response:
> POST /upload/youtube/v3/videos?part=snippet... HTTP/2
> Host: www.googleapis.com
> User-Agent: curl/7.64.1
> Authorization: Bearer ya23supersekritaccesstokenhunter2
> Accept: application/json
> Content-Type: application/json
> Content-Length: 111
>
< HTTP/2 200
< content-type: text/plain; charset=utf-8
< content-type: application/json; charset=UTF-8
< x-guploader-uploadid: ABg5...
< location: https://www.googleapis.com/upload/youtube/v3/videos?part=snippet...&upload_id=ABg5someuniqueuploadidentifier
< content-length: 0
< date: Tue, 30 Mar 2021 07:14:47 GMT
< server: UploadServer
While there are a couple of odd aspects to that HTTP response (see below), what we're looking for here is the Location header. The URL there is the one to which we must now send the binary data of the video.
Sending the binary data
In the first phase we sent the JSON representation of the video resource, the video's metadata, effectively. In this second phase we now send the video content itself, to the URL in the Location header in the first phase's response.
With curl, sending binary data in a file is easier than you think:
curl \
"https://www.googleapis.com/upload/youtube/v3/videos?part=snippet...&upload_id=ABg5someuniqueuploadidentifier" \
--header "Authorization: Bearer $(tget)" \
--data-binary @videofile.mp4
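Between the two phases you need to pick that upload URL out of the first response. Here's one way that might be done - a sketch, assuming the first-phase response headers have been saved to a file with curl's --dump-header (-D) option; the header file contents here are a made-up sample mimicking the trace above:

```shell
# A made-up saved header file, as 'curl --dump-header headers.txt ...' might produce
cat > headers.txt <<'EOF'
HTTP/2 200
content-type: application/json; charset=UTF-8
location: https://www.googleapis.com/upload/youtube/v3/videos?part=snippet&upload_id=ABg5someuniqueuploadidentifier
content-length: 0
EOF

# Header names are case-insensitive, and real header lines end in CRLF,
# so match on a lowercased name and strip any carriage return
uploadurl=$(awk 'tolower($1) == "location:" { print $2 }' headers.txt | tr -d '\r')
echo "$uploadurl"
```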
Wrapping up
That's pretty much it. I must say, I have struggled to get my brain around some of the YouTube API surface area, but the mist is starting to clear. If you're like me and also trying to grok things, perhaps this post will help a little.
Oh yes, and those odd aspects to the first phase HTTP response earlier?
Well, for a start, why are there two different Content-Type headers?
More importantly though, sending an HTTP 200 response to the request seems a little suspect. It's HTTP status 201 CREATED that is appropriate here, not 200 OK. And while a Location header in an HTTP response is appropriate (and more or less required) with a 201 CREATED status, with a 200 OK status it is not.
I'd been slightly apprehensive about replacing the gaskets on my La Pavoni lever espresso machine, as I'm not particularly skilled at this kind of thing and didn't want to break anything. But I've just gone through the process and things seem to have worked out OK, and I wanted to share that information - because if I can do it, you can too.
I'd started to notice some leakage in the grouphead - once the machine was up to temperature and pressure, water would drip out into the drip tray (or cup). Sometimes only a few drops; but at other times almost up to an espresso cup's worth.
Related, I'm sure, was the fact that the lever action, both up and down, was far from smooth. It was, how can I put it, bumpy and uneven, as though something was rubbing or catching. The odd thing is that because this happened over time, I didn't actually notice it in the early stages. But it became quite severe and I guessed it was a gasket issue - which was very likely related to the leakage too.
I watched and re-watched various videos on YouTube, and found that this one from Sam Stiles was the most helpful: La Pavoni- 5 piece gasket replacement. I must have watched that one at least 7 or 8 times before I started.
I ordered a La Pavoni Lever Grouphead Service Kit (New Group) from The Espresso Shop. I chose this to order as I wasn't sure what I needed, and in fact it came with some parts that were a nice bonus (more on that shortly).
Following the process demonstrated very ably by William Stiles in the video, I managed to remove the lever and then the grouphead, and then the piston.
Having watched how William used his tools, I ordered a set of snap ring pullers to be able to remove the snap ring holding the piston shaft gasket in place.
Having seen one used in another video, I also ordered a cheap hook and pick set. I wasn't entirely sure I'd need them but in fact they were very handy to have - I found the 90 degree angled one useful for prying the old gaskets off, and for retrieving the aforementioned piston shaft gasket.
I already had a rubber mallet so with the arrival of those online purchases, I was all set.
I went slowly and carefully, and everything pretty much went as William described. Here are some observations and experiences that I thought might be helpful.
Lever attachment
The lever attachment section was pretty dirty - especially the bolts and the roller nut; while I cleaned the lever itself, and the top of the grouphead, I didn't need to clean the bolts or the roller nut, as the service kit included new ones, and also new clips (called "circlips", which was new to me) that hold the bolts in place.
Removing the piston
It was quite difficult to remove the piston from the grouphead; I whacked it with the rubber mallet just like William did, with some force, but it wasn't budging.
But after removing the large gasket at the bottom of the grouphead, the one holding the shower screen in (i.e. the one that the top of the portafilter touches when you attach it when about to pull a shot) ... one more whack with the mallet did it and it came out much more easily.
With it came a small amount of water - luckily, I'd placed a towel where the drip tray usually goes, not for any water, but to prevent the piston scratching the chrome as it shot out of the bottom.
Cleaning the piston
I made sure to clean each and every part I could get access to, but the piston itself was by far the most grimy. I spent about 10 minutes with some hot soapy water and a gentle panscrub to remove a layer, which was quite greasy (from the coffee oils, I guess).
This was after removing the old piston gaskets, which looked pretty knackered. I found the pick set was useful for this, by the way.
Replacing the piston gaskets
After removing the piston, I found that this was one of the hardest things to do. It was easy enough to pry the old gaskets off, but the new ones weren't for going on easily. In another video, I saw that the person had pre-soaked the gaskets in warm water for 5 minutes to make them a little more pliable.
Doing this helped, but it still took me a couple of attempts. First of all I managed to get one on, but when I looked, it had twisted around and the "groove" was facing outwards. So I had to remove it and try again.
Removing it was difficult too, I didn't want to use the pick, or even a flathead screwdriver as they both had sharp bits and I didn't want to damage the new gaskets. But then I realised I could use the thin end of a teaspoon. Smooth, and, as it turns out, ideal!
I made sure I put the gaskets on as William had instructed, i.e. like the shape of a guitar body.
The piston shaft gasket, washer and snap ring replacement
Getting to the piston shaft gasket was fiddly but doable - mostly thanks to one of the snap ring pullers in the set I bought. Definitely recommended. I've no idea how I would have removed the snap ring without it, and I'm pretty certain I would have had no chance to put it back either.
By the way, the service kit also included a new snap ring, plus the washer that sits between the ring and the gasket. That was nice, as both the ring and washer were quite dirty and some limescale had built up there too. The gasket itself was pretty decrepit, at least as worn as the piston gaskets, if not worse.
Re-inserting the piston
When I was ready to re-insert the piston, I made sure to lightly grease the piston itself (around the head, including the gaskets), and put a small amount on the shaft. This helped with the insertion, but only after I'd fiddled around with a spoon to squash in the flanged part of the topmost piston gasket so it would go back into the grouphead.
You see William doing this, but as he did it so deftly, I didn't notice at first. It took me a minute or so to get this done.
Re-attaching the lever
Once the piston was in, the lever re-attachment was pretty easy. One thing I found fascinating is that the lower of the two nuts that are screwed onto the top of the piston shaft is there to provide an appropriate "stop" point, so that the piston doesn't go too low. I found that I had to adjust that nut a little bit as, later, when I attached the portafilter, the lever handle was touching the portafilter handle.
One thing that was lovely to see - and feel - was the huge difference this maintenance made. In both directions, the lever action (and more importantly piston action) was totally smooth, I couldn't believe how much better it was.
Re-attaching the shower screen and filter holder gasket
The final task was to re-attach the screen at the bottom of the grouphead. I say "re-attach", but in fact the service kit also came with a new one, so I used that.
Regarding the gasket - there were two in the service kit - one thinner one with a round cross-section, and one fatter one with a U cross-section. I guessed (correctly, I think) that it was the fatter one that I needed, based on the size of the old one that I'd removed.
The idea is that you slide it over the screen, from the bottom of the screen up to the lip, first, and then insert both into the bottom of the grouphead.
But as this fatter one had a U cross-section, there was a chance that I'd slide it up and insert it the wrong way.
And I did.
First of all I'd put it in with the "flat" part (the top of the U) facing upwards up into the grouphead. However, I couldn't get the portafilter in. After a few moments head scratching, I realised that the gasket must be the wrong way round and that the top of the U was preventing it from being pushed far enough up.
I pulled it gently back out (using the smooth edge of the teaspoon again) and slid it back over the screen, this time so that the flat part of the U was facing downwards (like this: ∩) and would come into contact with the top of the portafilter when inserted.
This was much better, and I could use the portafilter (without the basket, as William demonstrated) to push the screen and gasket up into the grouphead.
Update: I realised after posting this, and re-examining the schematic diagram that came in the service kit, that the "other" gasket was perhaps not an alternative filter holder gasket, but a group sleeve gasket - item number 77 in the schematic (see later for a section of that diagram). The group sleeve is the light-coloured plastic or bakelite cylinder inside of the grouphead, inside which the piston moves. I couldn't get this off as I didn't have the right tool, but I'd decided that that was fine, I'd do that next time. Anyway, I think I now realise what this "spare" gasket is for.
The group to boiler gasket
I didn't forget to replace the gasket between the grouphead and the boiler itself, that was the easiest part. It's important to note here, however, that I heeded William's advice not to over-tighten the two bolts that hold the grouphead onto the boiler. On tightening them, I came to feel a natural "stop" and didn't apply any further torque.
The entire process took about 90 mins, as I was going very slowly and also re-watching parts of the video as needed. I'm not very dextrous but managed to complete the service successfully.
I thought this experience and process was worth sharing, especially for folks that might be in my position right now - thinking or knowing you need to do it but being a little apprehensive.
It's doable, and definitely worth it!
gh, jq, fzf and the GitHub API
Yesterday, while thinking aloud, I was wondering how best to mass-delete logs from GitHub Actions workflow runs. Such a feature isn't available in the Web based Actions UI and my lack of competence in the Actions area means that I have a lot of cruft from my trial and error approach to writing and executing workflows.
The GitHub Workflow Runs API
I knew the answer probably was in the GitHub API, and it was - in the form of the Workflow Runs API. There are various endpoints that follow a clean and logical design. Workflow runs are repo specific, and to list them, the following API endpoint is available to access via the GET method:
GET /repos/{owner}/{repo}/actions/runs
Following this straightforward URL-space design, a deletion is possible thus:
DELETE /repos/{owner}/{repo}/actions/runs/{run_id}
Incidentally, I like the use of "owner" here - because a repo can belong to an individual GitHub account (such as qmacro) or an organisation (such as SAP-samples), and "owner" is a generic term that covers both situations and has the right semantics.
Requesting the workflow run information with gh
To make use of these API endpoints, I used the excellent gh GitHub CLI, specifically the api facility. Once authenticated, it's super easy to make API calls; to retrieve the workflow runs for the qmacro/thinking-aloud repo, it's as simple as this (some pretty-printed output is also shown here):
; gh api /repos/qmacro/thinking-aloud/actions/runs
{
"total_count": 22,
"workflow_runs": [
{
"id": 686610826,
"name": "Generate Atom Feed",
"node_id": "MDExOldvcmtmbG93UnVuNjg2NjEwODI2",
"head_branch": "main",
"head_sha": "24822bfb34573c0dc2fb6b0f83c42a1752a324d9",
"run_number": 13,
"event": "issues",
"status": "completed",
"conclusion": "skipped",
...
Making sense of the response with jq
The response from the API has a JSON representation and a straightforward but rich set of details. This is where jq comes in. I started with just pulling out values for a few properties like this:
; gh api /repos/qmacro/thinking-aloud/actions/runs \
> | jq -r '.workflow_runs[] | [.id, .conclusion, .name] | @tsv' \
> | head -5
686610826 skipped Generate Atom Feed
686610824 skipped Tweet new entry
686610823 skipped Render most recent entries
686471644 success Render most recent entries
686157878 success Render most recent entries
There's built-in support for pagination with gh api, with the --paginate switch, which is handy.
Breaking the jq invocation down, we have:

| Part | Description |
|---|---|
| -r | Tells jq to output "raw" values, rather than JSON structures |
| .workflow_runs[] | Process each of the entries in the workflow_runs array |
| [.id, .conclusion, .name] | Show values for these three properties |
| @tsv | Convert everything into tab separated values |
Notice the use of the | symbol too - the output of .workflow_runs[] is piped into the selection of properties, and the output of that is piped further into the call to the builtin @tsv mechanism.
I ended up using this approach, but in a slightly expanded way, using a couple of helper functions:

- one to make the .created_at property easier to read (for example changing "2021-03-26T09:10:11Z" into "2021-03-26 09:10:11")
- one to convert values of the .conclusion property into simpler and shorter terms

def symbol:
sub("skipped"; "SKIP") |
sub("success"; "GOOD") |
sub("failure"; "FAIL");
def tz:
gsub("[TZ]"; " ");
.workflow_runs[]
| [
(.conclusion | symbol),
(.created_at | tz),
.id,
.event,
.name
]
| @tsv
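As an aside, the helper functions can be seen at work by running the script (saved here as dwr.jq, a filename chosen just for this demo) against a minimal, made-up payload shaped like the API response:

```shell
# Save the jq script shown above as dwr.jq
cat > dwr.jq <<'EOF'
def symbol:
  sub("skipped"; "SKIP") |
  sub("success"; "GOOD") |
  sub("failure"; "FAIL");

def tz:
  gsub("[TZ]"; " ");

.workflow_runs[]
| [
    (.conclusion | symbol),
    (.created_at | tz),
    .id,
    .event,
    .name
  ]
| @tsv
EOF

# A single, invented workflow run, in the shape the API returns
echo '{"workflow_runs":[{"conclusion":"success","created_at":"2021-03-26T09:10:11Z","id":686471644,"event":"push","name":"Render most recent entries"}]}' \
  | jq -r -f dwr.jq
# emits a tab separated line starting GOOD, with the reformatted timestamp
```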
Presenting the list with fzf
Now all that was required was to present the workflow runs in a list, for me to choose which ones to delete. The wonderful fzf came to the rescue here. If you've not heard of fzf, go and read all about the command line fuzzy-finder right now. I've written a couple of posts on this very blog about fzf basics too:
This is how I combined the gh, jq and fzf invocations, inside a selectruns function:
gh api --paginate "/repos/$repo/actions/runs" \
| jq -r -f <(jqscript) \
| fzf --multi
With the --multi switch, fzf allows the selection of more than one item.

Then it was just a case of processing each selected item, and making use of that other API endpoint we saw earlier inside a deleterun function, like this:
local run id result
run=$1
id="$(cut -f 3 <<< "$run")"
gh api -X DELETE "/repos/$repo/actions/runs/$id"
[[ $? = 0 ]] && result="OK!" || result="BAD"
printf "%s\t%s\n" "$result" "$run"
The use of cut was to pick out the id property in the list, as presented to (and selected via) fzf; the list is tab separated (thanks to @tsv) and cut's default delimiter is tab too, which is nice.
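In isolation, that extraction step looks like this (in Bash, with an invented sample line):

```shell
# A line as fzf might return it: symbol, timestamp, id, event and name,
# separated by tabs (the values are made up)
run=$(printf 'GOOD\t2021-03-26 09:10:11\t686471644\tpush\tRender most recent entries')

# Field 3 is the run ID (cut splits on tab by default)
id=$(cut -f 3 <<< "$run")
echo "$id"
# prints: 686471644
```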
The script in action
That's about it - here's the entire script in action:
And you can check out the script, as it was at the time of writing, in my dotfiles repository here: dwr.
TL;DR - My Thinking Aloud repo is where I am experimenting with journalling via GitHub issues. Check out the issues themselves, the rendered versions of recent entries, the Atom feed or the GitHub Actions workflows with which I automate some of the process.
I've been blogging for over 20 years, since 2000. I started with a Blogspot hosted blog over at https://qmacro.blogspot.com which amazingly is still around.
This blog
I quickly moved over to a self-hosted blogging system, initially powered by the beautifully simple Blosxom.
Over the years I tried out various blogging software, including Ghost and Movable Type, but at the core I've had my main blog (now at https://qmacro.org - where you're reading this) since 2002. I'm currently using GitHub Pages to host and manage things and I'm quite happy with it. I see this blog as my main personal blog and a place for "long form" posts on various subjects (as you can see from the index).
My posts on the SAP Community blog
Of course, I also publish on the SAP Community blog, which is a collective set of posts by many, many members of the SAP ecosphere. I have posts under the dj.adams identifier and also (since I joined SAP) under the dj.adams.sap identifier, and as you might expect, the subject matter is very definitely SAP related. That said, you may be surprised at the breadth of topics - posts on subjects as diverse as terminal tips and fun runs are all there.
My autodidactics blog
In the middle of last year I started a new, secondary blog autodidactics to share things I'd learned (I endeavour to be a life long learner). I was inspired to create such a blog having seen Simon Willison's TIL (Today I Learned) site.
Moreover, I did very definitely feel I needed a place to share smaller nuggets of information that I'd learned; this in turn was triggered by reading some of rwxrob's repository of dotfiles and scripts.
Ever since I read through the entire source code base of the original Jabber (XMPP) server jabberd to understand how everything worked, in researching for my O'Reilly book Programming Jabber, I've been a strong proponent of reading other people's code. There's so much richness out there, a variety of styles and approaches, and oh so much to learn.
And of course, when it comes to sharing thoughts, there's always Twitter, which has been referred to as a "microblogging" platform, in the same way that identi.ca was. The key difference between Twitter and identi.ca was that the former is centralised, and the latter (sadly no longer in operation) was distributed. With identi.ca I felt in more control of my microblogging efforts. Don't get me wrong, Twitter is a great platform for conversation and ideas, but it's still centralised.
Journalling
And so to Thinking Aloud. If I lay out the different outlets for my thoughts in decreasing order of magnitude, I end up with something that looks like this:
+---------------------------------------------------------------+
| Major | Minor | Mini | Micro |
|---------------|---------------|---------------|---------------|
| qmacro.org | autodidactics | (something | Twitter |
| SAP Community | | missing) | |
+---------------------------------------------------------------+
What do these categories mean to me?
Major: If I want to write something in the major category, that's a relatively significant investment in time to create and publish posts (and for the consumer it can be significant too). That's fine, and those posts definitely will always have their place.
Minor: If I want to share something specific that I learned, such as on the subject of the shell's declare builtin (in Understanding declare), I have my autodidactics blog. The posts are usually shorter -- although some may be more densely packed -- and about something quite small and specific.
Micro: If I just want to share a fleeting idea (or rant), I have Twitter.
So I feel there's a gap, for the Mini category. I have been inspired by rwxrob's journalling, where he writes in relatively short form, but in a structured fashion. It seems a way of getting things written down, freeing up mental space for new ideas, and also a semi-cathartic approach to expressing thoughts, regardless of how fully formed (or not) they are.
One of the aspects that I like about the journalling that I've seen is that it's about the body of the journal entry first, and the title is not important. In fact, rwxrob's journal titles are timestamps, which seems a great way to avoid wasting brain cycles trying to think of a title, either before writing the entry (when you don't exactly know what you're going to write), or after (when you may have covered various topics in one entry).
So I've decided to try to feel my way into this Mini gap, and do some journalling of my own. The idea is that the amount of pre-thought, the level of friction & inhibition to create a new journal entry should reflect where this is in the "scale" expressed in the table above. I don't think much before tweeting (maybe I should, but that's a different story) and journalling is more towards that end of the scale than the other.
Using GitHub features
As part of the experiment, I decided to learn more about GitHub features while doing this, by making them a fundamental basis for the journalling mechanism.
I have a new GitHub repository thinking-aloud, and each journal entry is an issue in there. The beauty of GitHub issues is that Markdown is supported, plenty rich enough to express my ideas.
Moreover, there are other metadata aspects such as labels that I might want to take advantage of at some stage (think "categories" in Atom feed entries).
Not least is the chance for folks to engage with the journal entries, via reactions and comments. I'm not sure how this is going to pan out, but I want to at least give this aspect a chance. I may get no engagement, I may get a load of spam. Let's see.
Most interestingly (to me) is the way I create new journal entries, and how I build the Atom feed so folks can subscribe.
I create a new entry via a shell function j, at the heart of which is this invocation:
gh issue create --title "$(date '+%Y-%m-%d %H:%M:%S')"
My editor (Vim) is then launched and I write Markdown, which is then sent to be the body of a new issue when I finish. Simple!
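For illustration, a minimal version of such a j function might look like this - note that the --repo value and the function body are my guesses, not necessarily what the real function contains:

```shell
# Hypothetical journalling helper: create an issue titled with the current
# timestamp; gh then launches $EDITOR for the entry body
j() {
  gh issue create --repo qmacro/thinking-aloud --title "$(date '+%Y-%m-%d %H:%M:%S')"
}

# The timestamp title format on its own:
date '+%Y-%m-%d %H:%M:%S'
```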
Each time a new journal entry (issue) is created, I rebuild the Atom feed. This is done via the power of GitHub Actions. Have a look at the generate-feed workflow to get an idea of how that works; in one of the steps there, I'm using gh to call the GitHub API to get the list of issues, and piping that (JSON) into a simple Node.js script feed that uses the freakishly easy-to-use NPM module feed (thanks jpmonette!) to generate the Atom feed.
Additionally, I have implemented some simple rendering to make the entries easier to consume - the most recent entries are rendered into a Markdown file in the main repository, and GitHub's Markdown rendering is more than good enough to make things easy and pleasant to read.
Summary
And that's it, so far. As usual, I'm making this up as I go along, and things may change along the way. I've written a couple of journal entries already, check them out and let me know what you think.
I'm attracted to the somewhat arcane details of Bash shell expansions, and it was while looking up something completely different (more on that another time) that I decided to re-read the parameter expansion section of the GNU Bash manual.
In many of my scripts, like a good shell scripting citizen I check to ensure that there's a value for a parameter I want to use, and if there isn't, I abort with a message. In this case, abort implies returning a non-zero code, indicating failure.
The most recent example is in a little script that I've started to use to retrieve the image related to a YouTube live stream episode; this requires the YouTube video ID. The relevant part of this getepisodeimage script currently looks like this:
# Requires YouTube video ID
readonly id=$1
if [ -z "$id" ]; then
echo "Usage: $scriptname <YOUTUBE VIDEO ID>"
exit 1
fi
(And yes, before you say it, I could have saved myself the execution of a binary by replacing the [ ... ] with [[ ... ]]. If you're curious, see The open square bracket [ is an executable.)
Anyway, rather than test for the emptiness of the value of the id parameter (with -z) in a standalone if ... fi section, I could have used the following shell parameter expansion pattern:
${parameter:?word}
The description for this is as follows:
If parameter is null or unset, the expansion of word (or a message to that effect if word is not present) is written to the standard error and the shell, if it is not interactive, exits. Otherwise, the value of parameter is substituted.
I've written about another shell parameter expansion pattern before (see Shell parameter expansion with :+ is useful) but I'd forgotten about this one.
Using this pattern, I can replace the code above with this:
readonly id=${1:?Usage: $scriptname <YOUTUBE VIDEO ID>}
Much more succinct - I like it!
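To see the abort behaviour in action, here's a toy script (the script name and video ID are invented for the demo):

```shell
# getid.sh: aborts with the usage message if no argument is supplied
cat > getid.sh <<'EOF'
#!/usr/bin/env bash
readonly id=${1:?Usage: getid <YOUTUBE VIDEO ID>}
echo "got id: $id"
EOF

bash getid.sh             # no argument: usage message on stderr, non-zero exit
echo "exit status: $?"    # prints: exit status: 1
bash getid.sh VIDEOID123  # with an argument: prints "got id: VIDEOID123"
```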
In fzf - the basics part 1 - layout I shared what I learned about controlling fzf's layout. In the examples I showed, based on directories and files in the SAP TechEd 2020 Developer Keynote repository (which I'll use again in this post), fzf presented a total of over 17000 items from which to make my choice.
That's a lot, and far more than I want to consider wading through.
In a pipeline context, fzf will present choices given to it in that pipeline, i.e. via STDIN, like this:
; printf "one\ntwo\nthree" | fzf --layout=reverse --height=40%
>
3/3
> one
two
three
Interestingly, to copy/paste this example from my terminal, I had to (discover and) use the --no-mouse option from the Interface category, so that the mouse was free to use and not locked to fzf during that moment.
But I want to think about using fzf in a pipeline at another time; right now I'm just digging into options where fzf is used without receiving anything on STDIN.

So what does fzf do if it's not fed anything to display via STDIN? Well, the README states that unless otherwise directed, fzf uses the find command to build the list of items. The actual sentence in the Usage section reads as follows:
"Without STDIN pipe, fzf will use find command to fetch the list of files excluding hidden ones."
At first, I stopped reading after "fzf will use find command to fetch the list of files", and missed the "excluding hidden ones".
That careless omission did, however, cause me a pleasant coffee-length digression into the nuances of basic uses of the find
command. I created a set of test files and directories like this, some hidden, some not, as you can see:
.
├── Fruit
│   ├── apple
│   ├── banana
│   ├── cherry
│   └── .damson
├── .Trees
│   ├── ash
│   └── birch
├── aardvark
├── badger
└── .cow
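To make the examples reproducible, here's a sketch that recreates this test tree in a throwaway directory:

```shell
# Recreate the test files and directories, some hidden, some not.
dir=$(mktemp -d) && cd "$dir" || exit 1
mkdir -p Fruit .Trees
touch Fruit/apple Fruit/banana Fruit/cherry Fruit/.damson
touch .Trees/ash .Trees/birch aardvark badger .cow
```

From a shell sitting in that directory, the find invocations that follow can be run as shown.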
If pressed, I'd say that I'd naturally use the following incantation as a basic way to find files and directories: find . -type f
. The results are interesting.
In all the following examples, I'm in the directory denoted by
.
at the top of the tree as shown above. The ;
is my simple prompt (inspired by Kate), with my directory location shown in a line above that (# /tmp/testdir
).
# /tmp/testdir
; find . -type f
./Fruit/apple
./Fruit/cherry
./Fruit/.damson
./Fruit/banana
./.cow
./aardvark
./.Trees/birch
./.Trees/ash
./badger
# /tmp/testdir
;
(9 entries)
I'd always considered that the "default" behaviour, but on reflection, it's arguably not default, as I'm using something specific (.
) as the first argument to find
, whereas I could just as easily have used *
, thus:
# /tmp/testdir
; find * -type f
Fruit/apple
Fruit/cherry
Fruit/.damson
Fruit/banana
aardvark
badger
# /tmp/testdir
;
(6 entries)
That's quite a difference! The Stack Overflow entry Difference between find . and find * in unix confirms that difference.
. results in everything in . being found, including the hidden file .cow and the hidden directory (and its contents) .Trees/
* results in only the "visible" content in . being returned; note that this visibility difference only applies to the starting directory in question - Fruit/.damson was reported even though .cow and .Trees/ weren't
So I wonder if either of these two incantations is what fzf uses by default. Let's see what fzf gives, in this same starting directory:
# /tmp/testdir
; fzf --height=40% --reverse
>
5/5
> Fruit/apple
Fruit/cherry
Fruit/banana
aardvark
badger
(5 entries)
Nope!
Of course, there's that "excluding hidden ones" phrase from the README to which I must now pay attention. What I need is to tell find
explicitly to exclude hidden files and directories. This will do the trick:
# /tmp/testdir
; find . -type f -not -path '*/\.*'
./Fruit/apple
./Fruit/cherry
./Fruit/banana
./aardvark
./badger
# /tmp/testdir
;
(5 entries)
That's more like it! In fact, because we're explicitly excluding content based on a pattern, the same results are forthcoming whether we use a .
or *
as the first argument to find
. Here's what we get with a *
:
# /tmp/testdir
; find * -type f -not -path '*/\.*'
Fruit/apple
Fruit/cherry
Fruit/banana
aardvark
badger
# /tmp/testdir
;
(5 entries)
OK, there is a subtle difference, in that in this latter case, the ./
prefix is not included in the output of each entry. This is closest to what we see with fzf
too.
So if I wanted fzf
to actually show me hidden files, how would I do that? Well of course one way would be to run the appropriate find
command and then pipe the output into fzf
, like this:
# /tmp/testdir
; find . -type f | fzf --height=40% --reverse
>
9/9
> ./Fruit/apple
./Fruit/cherry
./Fruit/.damson
./Fruit/banana
./.cow
./aardvark
./.Trees/birch
./.Trees/ash
./badger
But I want to leave the pipeline approach until another time. Can I influence fzf
's search behaviour when, as the README puts it, "input is [the] tty"?
The answer is yes and is in the form of the environment variable FZF_DEFAULT_COMMAND
. If set, fzf
will use its value as the command to execute to find the files to display. So instead of using the pipeline above, I could do this:
# /tmp/testdir
; export FZF_DEFAULT_COMMAND='find . -type f'
# /tmp/testdir
; fzf --height=40% --reverse
>
9/9
> ./Fruit/apple
./Fruit/cherry
./Fruit/.damson
./Fruit/banana
./.cow
./aardvark
./.Trees/birch
./.Trees/ash
./badger
Nice - now fzf
shows me hidden files.
If we can modify what fzf
uses to find files, we can go further, as the README suggests, and use another utility entirely, as described in the README's Tips section (and hinted at also in the Environment variables section).
I've installed the search utility ripgrep, known as rg
, as it works for me in a more natural DWIM (Do What I Mean) mode.
Let's see what rg
will do for us with the same content. It is more like grep than find, and so we need to tell it to search at the file level, with --files, for the purposes of this exploration:
# /tmp/testdir
; rg --files
badger
aardvark
Fruit/banana
Fruit/cherry
Fruit/apple
# /tmp/testdir
;
(5 entries)
rg
won't consider hidden files and directories unless told to explicitly with --hidden
:
# /tmp/testdir
; rg --files --hidden
badger
.Trees/ash
.Trees/birch
aardvark
.cow
Fruit/banana
Fruit/.damson
Fruit/cherry
Fruit/apple
# /tmp/testdir
;
(9 entries)
At this level, rg
delivers results similar to what we already get with find
.
Where rg
comes into its own, DWIM-like, is when the search in question is within a git repository. In that case, it will respect what you have in your .gitignore
file.
I was curious to see this in action in the context of the simple set of files above. I added a .gitignore
file in /tmp/testdir
containing a single entry (Fruit
) and then ran both find . -type f -not -path '*/\.*'
and rg --files
:
# /tmp/testdir
; cat .gitignore
Fruit
# /tmp/testdir
; find . -type f -not -path '*/\.*'
./Fruit/apple
./Fruit/cherry
./Fruit/banana
./aardvark
./badger
# /tmp/testdir
; rg --files
badger
aardvark
Fruit/banana
Fruit/cherry
Fruit/apple
# /tmp/testdir
;
Hmm, so what's going on here? They both produce the same list of files, despite the presence of the .gitignore
file and its contents.
Turns out that rg will only respect .gitignore
in the context of an actual git repository, which makes sense. So a quick git init
in the directory later, and we now see a different result for rg --files
:
# /tmp/testdir
; git init
Initialized empty Git repository in /private/tmp/testdir/.git/
# /tmp/testdir (master #%)
; rg --files
badger
aardvark
# /tmp/testdir (master #%)
;
That's more like it - the Fruit/
directory and its contents are ignored.
Moving back to the repository content that I have been using to explore fzf
in more depth (especially in fzf - the basics part 1 - layout), let's see what effect rg
's respect for .gitignore
has on the results in this more realistic scenario.
First, what does the incantation of find
that most closely resembles fzf
's default behaviour give us from the top level of that repository?
# /tmp/teched2020-developer-keynote (main *=)
; find . -type f -not -path '*/\.*' | wc -l
17688
# /tmp/teched2020-developer-keynote (main *=)
;
OK, so that's what we got in the previous post. The repository has a .gitignore
file:
# /tmp/teched2020-developer-keynote (main *=)
; cat .gitignore
node_modules/
*.swp
sk*.json
default-env.json
.DS_Store
dashboard.zip
mta_archives/
ui/resources
*.db-journal
*.token
kubeconfig.*
# /tmp/teched2020-developer-keynote (main *=)
;
So let's see what rg
gives us:
# /tmp/teched2020-developer-keynote (main *=)
; rg --files | wc -l
163
# /tmp/teched2020-developer-keynote (main *=)
;
That is certainly a huge difference, mostly a result of ignoring a load of stuff - not least in the various node_modules/
directories within the repository.
Now that the list of choices is more manageable, I can start to think about what it actually contains, and what it doesn't. There are hidden files in the repository that I actually want to be able to select. fzf
's default behaviour is preventing that from happening, but it's only now that my head is clear enough to address this (looking through a list of 17000+ files fogged my thinking).
So I remember I can use the --hidden
option with rg
; let's try that:
# /tmp/teched2020-developer-keynote (main *=)
; rg --files --hidden | wc -l
209
# /tmp/teched2020-developer-keynote (main *=)
;
OK, so a few more than the 163 that rg --files
returned. Good stuff. But what are those extra hidden files? Let's take a look, using a regular expression to reduce the output to entries where there's a .
either at the start of the line or following a /
:
# /tmp/teched2020-developer-keynote (main *=)
; rg --files --hidden | grep -E '(^|\/)\.' | sort
.abapgit.xml
.git/HEAD
.git/config
.git/description
.git/hooks/applypatch-msg.sample
.git/hooks/commit-msg.sample
.git/hooks/fsmonitor-watchman.sample
.git/hooks/post-update.sample
.git/hooks/pre-applypatch.sample
.git/hooks/pre-commit.sample
.git/hooks/pre-merge-commit.sample
.git/hooks/pre-push.sample
.git/hooks/pre-rebase.sample
.git/hooks/pre-receive.sample
.git/hooks/prepare-commit-msg.sample
.git/hooks/update.sample
.git/index
.git/info/exclude
.git/logs/HEAD
.git/logs/refs/heads/main
.git/logs/refs/remotes/origin/HEAD
.git/objects/pack/pack-8933b87ef40a05f8e4974179d6b7288c4cbb0a39.idx
.git/objects/pack/pack-8933b87ef40a05f8e4974179d6b7288c4cbb0a39.pack
.git/packed-refs
.git/refs/heads/main
.git/refs/remotes/origin/HEAD
.github/workflows/image-build-and-publish.yml
.github/workflows/out-of-office.yml
.gitignore
.reuse/dep5
cap/brain/.cdsrc.json
cap/brain/.dockerignore
cap/brain/.eslintrc
cap/brain/.gitignore
cap/brain/.prettierignore
cap/brain/.prettierrc.json
cap/brain/.vscode/extensions.json
cap/brain/.vscode/launch.json
cap/brain/.vscode/settings.json
cap/brain/.vscode/tasks.json
converter/.dockerignore
rapreceiver/.gitignore
s4hana/sandbox/.gitignore
s4hana/sandbox/router/.dockerignore
s4hana/sandbox/router/.prettierignore
s4hana/sandbox/router/.prettierrc.json
# /tmp/teched2020-developer-keynote (main *=)
;
That's nice - I can see important hidden files such as .abapgit.xml
, cap/brain/.dockerignore
and .github/workflows/image-build-and-publish.yml
now.
However, the presence of all those files in the .git/
directory is clouding that overview. Let's get rid of those with rg
's --glob
option, with which one can include, or (using a !
to negate things) exclude results:
# /tmp/teched2020-developer-keynote (main *=)
; rg --files --hidden --glob '!.git/' | wc -l
184
# /tmp/teched2020-developer-keynote (main *=)
;
Let's see what makes up the list of hidden files now:
# /tmp/teched2020-developer-keynote (main *=)
; rg --files --hidden --glob '!.git/' | grep -E '(^|\/)\.' | sort
.abapgit.xml
.github/workflows/image-build-and-publish.yml
.github/workflows/out-of-office.yml
.gitignore
.reuse/dep5
cap/brain/.cdsrc.json
cap/brain/.dockerignore
cap/brain/.eslintrc
cap/brain/.gitignore
cap/brain/.prettierignore
cap/brain/.prettierrc.json
cap/brain/.vscode/extensions.json
cap/brain/.vscode/launch.json
cap/brain/.vscode/settings.json
cap/brain/.vscode/tasks.json
converter/.dockerignore
rapreceiver/.gitignore
s4hana/sandbox/.gitignore
s4hana/sandbox/router/.dockerignore
s4hana/sandbox/router/.prettierignore
s4hana/sandbox/router/.prettierrc.json
# /tmp/teched2020-developer-keynote (main *=)
;
Now we're talking! That looks like the level of results that will work for me generally. So I can now add that glob exclusion to the value for FZF_DEFAULT_COMMAND
like this:
# /tmp/teched2020-developer-keynote (main *=)
; export FZF_DEFAULT_COMMAND='rg --files --hidden --glob '"'"'!.git/'"'"
# /tmp/teched2020-developer-keynote (main *=)
;
The
"'"
sequences are to supply single quotes in an otherwise single-quoted string.
This can be seen in my Bash configuration script for fzf
.
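To unpack that quoting, here's how the shell concatenates the adjacent quoted segments into the final value:

```shell
# Each adjacent quoted segment is concatenated by the shell:
#   'rg --files --hidden --glob '  ->  rg --files --hidden --glob (plus a space)
#   "'"                            ->  '
#   '!.git/'                       ->  !.git/
#   "'"                            ->  '
export FZF_DEFAULT_COMMAND='rg --files --hidden --glob '"'"'!.git/'"'"
echo "$FZF_DEFAULT_COMMAND"   # rg --files --hidden --glob '!.git/'
```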
Now I've customised exactly which type of entries I want to be included (and excluded) in the search results that fzf
presents to me in a tty context, I'm happy:
# /tmp/teched2020-developer-keynote (main *=)
; fzf --height=40% --reverse
>
184/184
> enabling-workflows.md
message-bus-settings.sh
.gitignore
README.md
quickstart.md
.reuse/dep5
images/whiteboard.jpg
images/enable-kyma.png
images/enabling-workflows.png
images/split-terminals.png
kymaruntime/README.md
mock-converter/index.js
storyboard.md
.abapgit.xml
mock-converter/package.json
abap/README.md
Far easier to deal with (than the 17000+ files previously) but nothing important omitted.
Turns out that FZF_DEFAULT_COMMAND
is useful, and it's also not the only environment variable that fzf
sports. I'll look into others in the next post.
Just now my good friend Ronnie Sletta drew our attention to a question by Chris Roberts on video content: "Iām thinking of making some free courses and putting them on YouTube. Do you prefer a series of short videos, or one long video? If a series, should I release one a week? If one long video, how long?".
I started to reply on Twitter, then found myself needing to use the "1/n" tweet thread approach, which I've never really liked, so I thought I'd take a leaf out of Scott Hanselman's book and reply once in a form that's arguably more permanent and easier to read.
Form variation
First off, I think it's important for us to remember that folks have different preferences when it comes to consuming content and learning. Some prefer the written word, and some prefer video instruction or demonstration. But it goes deeper than that.
Descend into the video category, and within it there are some who like to settle in and watch a "feature length" video. Others prefer short, sharp and to-the-point videos, the moving picture equivalent of good answers on Stack Overflow. Then there's the question of time available, which also factors into the decision on which videos to watch. Often I find myself thinking:
"OK, I've got 15 mins before the next meeting, and I'd like to learn more about X - what videos fit that combination of time and topic filter?"
It seems obvious to me that when it comes to the time filter, shorter time slots are going to be more common, so in one sense shorter videos in general seem like a good idea.
But there are cases to be made for both short and long form videos. They serve different purposes and contexts.
Long form
Before I proceed with this thought, let me be clear on what I mean by "long form". There are recordings of live streams from YouTube or Twitch sessions that are very often far more than an hour or so. A live stream that is just one hour is actually quite unusual.
For me though, an "hour" is long form. Anything longer than that is beyond the question at hand, and for me fits into the "I want to watch my favourite streamer and relax, so I'll watch this recording as they're not live right now, or because it covers something I'm interested in" category.
It's worth pointing out at this point that the main live stream episodes that I put out are deliberately limited to one hour. That's for many reasons, here's the main one:
This applies to me too - I have other tasks to accomplish and meetings to attend in my working day.
With this in mind, I'd say that the "long form" categorisation of one hour videos only applies if they're not live. I'd almost go so far as to say that a live stream of less than an hour has more disadvantages than advantages:
Video chapters
While short form videos are great for focused search, confirmation and consumption, there's a key feature that's essential for making longer form videos more easily consumable and more useful, and that's the video chapters feature.
I think they go some considerable way towards bringing the "long form" videos closer to "short form" in consumability and relevance.
I've been making use of video chapters for a while, and I can thoroughly recommend it.
Some examples
I use video chapters in my live stream recordings; after a live stream is finished, I scrub through the recording and then add video chapter information in the description, making it much more accessible and useful for consumers. Here's an example from an episode of the Getting the most out of the SAP TechEd Developer Keynote repository series on the Hands-on SAP Dev show*.
*See An overview of SAP Developers video content for more details.
My other SAP Developer Advocate colleagues use video chapters too.
Talking of SAP Developers video content, we've just launched the first video in a new shorter form show - SAP Tech Bytes. The videos here are deliberately short, to be more consumable in a shorter amount of time, to be focused on one specific topic, and also to provide form variation.
Here's the first episode: SAP Tech Bytes: Tutorial - Create SAP HANA Database Project. Note that there are video chapters even in this shorter form content.
In fact, we use video chapters in even our shortest form videos - the episodes of the SAP Developer News show, where each video is only around five minutes long - deliberately a coffee break length.
Frequency and schedule
I've just got to here and realised I haven't even talked about frequency. I thought it might be at least helpful to give some examples; I've been live streaming in my role as SAP Developer Advocate since January 2019, and have kept more or less the same frequency since then, which is weekly. I've done the occasional second live stream in a single week, but I treat (and refer to) those as "off piste" and not really part of the usual cadence.
Perhaps more importantly than the frequency, at least for live streams, is the consistency of day and time, i.e. the schedule. I think of my episodes of Hands-on SAP Dev as episodes of a TV programme, and again, because it's broadcast live, the best way to help folks not to miss it is to be consistent and predictable with the schedule. That's why I broadcast my episodes on Fridays, at 0800 GMT.
Incidentally, I chose a relatively early morning slot because that's when I'm most awake and my brain is buzzing - I'm a morning person and the plate spinning that's required to stream live is slightly less difficult then.
I think the same scheduling rules apply to YouTube Premieres too. Premieres are videos that you can pre-record but have the first broadcast set for a fixed date and time in the future, with all the build-up and excitement of a live stream; during playback you can attend and interact in the chat with the viewers. It's sort of a combination of live stream and recording, which can work really well.
When it comes to recorded videos, i.e. neither live streams nor premieres, then the schedule is not that important, so it just comes down to frequency. And that really depends on two factors:
There's a balance you need to find between these two factors, and there's no algorithm I know of that will provide a solid answer here; it really depends on how you work, what you have to share, and so on.
That said, here's a general piece of advice: If you have a load of pre-recorded videos waiting to upload to YouTube, don't be tempted to just publish them all at once. Resist the urge to flood your viewers' brains with all that wonderful goodness, and instead publish them in a spaced-out fashion. That has two advantages:
Wrapping up
Anyway, this post is already far longer than I expected it to be; I'll bring it to a close now, but as it's just blog post content, I may come back to it in the future and update it as I see fit. That's the wonderful nature of blog posts and how they're still the backbone of many communities.
Happy videoing!
In the context of doing less and doing it better I decided to start learning more about fzf
, the "command line fuzzy finder". Learning more wasn't difficult, because despite using it for quite a while, I've never really read any of the documentation, and have thus only scratched its surface.
So I started with the first part of the main README, and here's what I found.
The examples I give in this post are taken from a directory and file structure reflecting the SAP TechEd 2020 Developer Keynote repository, which has multiple directories and subdirectories, lots of files with different extensions, hidden files and directories (and I'm not just talking about the .git/
directory) and also stuff that we often want excluded, such as any node_modules/
directories. Fairly representative and useful for illustration.
An awareness of new and changed features
I was using an older version of fzf
because I hadn't upgraded it; a quick brew update; brew upgrade fzf
later and I was using the latest release. Not absolutely essential for me, but doing this makes me more aware of the fact that fixes and features do come along, and also exposes me to options that I might not have known about. So an indirect but useful advantage already.
I'd already installed the key bindings (to get fzf
to react to Ctrl-T and Ctrl-R) so that was still OK.
Basic navigation
I'd been using the arrow keys to move up and down in the list that fzf
presents. The shame of it! Now that I've learned I can use Vim-style key bindings to move down (Ctrl-J) and up (Ctrl-K), I feel less unclean. There's even support for the mousewheel, but the less said about that the better.
Anyway, it's time to get to the main topic of this post - how to affect fzf
's appearance, or layout.
Layout
Out of the box, fzf
will use the entire height of your current terminal to display the choices, and this is more or less how I've mostly used fzf
thus far:
But it doesn't have to be this way; in the Layout category of options there's --height
with which you can tell fzf
only to use a certain percentage of the terminal's height.
Moreover, the jump from the line I was on when fzf
was invoked, down to the bottom of the screen where I was to make my choice, was a little jarring for me.
I'd vaguely (but mistakenly) thought that the --layout=reverse
option, also in the Layout category, was something to do with the sort order of the choices presented. Turns out that the order can be reversed with the --tac
option (taken from the name of tac
, a command independent of fzf
, whose name is the opposite of cat
, see?) and that the --layout=reverse
relates to the general presentation of the choices.
So with --layout=reverse
I can reduce that jarring by having the place where I make my choice at the top of the list rather than at the bottom, like this:
There's a couple of other options that I found that made the appearance even better for me, but these are more subjective.
First, I can get a border around everything with the Layout option --border
. In fact there are multiple values that can be specified for this option; the default is to make a rounded border around all four sides.
Then I can save a bit of space by specifying the value inline
for the --info
option, also in the Layout category, to get the stats displayed on the same line as my input.
Here's both of those options in action:
In this and the next asciicast, the right hand edge of the border is not properly displayed or reproduced in asciinema for some reason; just imagine that the options are nicely boxed all the way around.
Before we leave the Layout category, there's a couple of other options that can give a nice effect, especially for making dmenu
or rofi
style popup menus. These are --margin
and --padding
. I've found that to get a popup menu effect, it's worth leaving off the height option (--height
) to get full screen:
Next time
That's about it for what I've learned about controlling the appearance. There's actually plenty more on this in the wiki.
Note that right now there are 17688 entries in the list of choices presented to me. That's a lot, and far more than I'd ever actually want to select from. Next time I'll take a look at a couple of fzf
environment variables, one of which controls what command fzf
uses, and how that can be changed to affect what gets displayed (or not displayed) in the choice list, so I can address that large number of entries issue.
Update: part 2 of this series is now available: fzf - the basics part 2 - search results.
In October last year Samir Talwar tweeted something simple yet profound: "Do less, and do it better".
In my work and play I discover and start using various tools and technologies. The pace of change in this industry, coupled with the (not unpleasant) demands on what I have to produce, means that I often end up with only a shallow understanding of things. And sometimes these are things I use every day.
The nature of my job as a developer advocate (but I think this extends to development in general), in the context of that fast pace of change, means that there's always something new to learn, to adopt, and to incorporate into a workflow, process or solution. But that can come at a price - of limited comprehension and mastery.
To explain further, I'm going to stretch a metaphor relating to ploughing a field and sowing seeds.
Ploughing and sowing
As an individual, I sometimes feel as though I'm trying to prepare a large field and plant seeds there using a poorly hand-constructed and inefficient plough made of the wrong sort of wood and bits of string, combined with a seed drill made out of old toilet rolls and sticky tape. Not only that, but I'm trying to plant across the entire field, 50 furrows wide, as I move along.
Needless to say, the ploughing doesn't go very well, and the seeds are planted imprecisely, sometimes superficially, mostly wastefully, resulting in poor distribution, low growth and high energy expenditure.
But if I were to abandon the idea of going wide, and instead go narrow, focusing on just a handful of furrows, I could afford to take the time to correctly plant each seed, nurturing & watering each one, producing strong plants with deep roots and healthy growth.
I've thought this for a while but never got round to doing anything about it. Samir's tweet has galvanised me into spending some time working out what that means for me.
Consolidating
So this year I'm attempting to "do less, and do it better" by acknowledging the tools I use day in, day out, learning more about them, restricting myself to a narrow set of topics, moving a step closer towards mastery in each, and really benefiting from everything they have to offer.
Here's an example from this weekend; I read the entirety of the main README for the excellent fuzzy-finder tool fzf
, all 16 pages. That might seem ridiculous to say (16 pages is not a lot) but I've used fzf
for a year or so and never RTFM'd before. In my defence, I've also been constantly and painfully aware that I've merely scratched the surface. I've now discovered some fzf
gems that I can put into practice immediately, and some areas that I need to dig into more.
Likewise for other tools that I use, tools that are not only essential, but which, when mastered, can make my workflows even better. I'm thinking of Vim (I've recently started watching my friend and colleague David Kunz's DevOnDuty series, which I can strongly recommend), tmux
(rwxrob is a great practitioner, and I should re-read Brian P. Hogan's great book on tmux too) and of course the environment and language that ties it all together for me - Bash.
The lockdown has afforded me time to read more, and I need to embrace that and work out how I can keep that momentum up. I want to tip the balance over from always having my fingers on the keyboard towards stepping away from the keyboard to read, reflect and consolidate my learning.
Update 02 Feb 2021: I've started digging deeper into fzf
- see fzf - the basics part 1 - layout and fzf - the basics part 2 - search results over on my Autodidactics blog.
(Jump to the end for a couple of updates, thanks gioele and oh5nxo!)
I'm organising my GitHub repositories locally by creating a directory structure representing the different GitHub servers that I use and the orgs and users that I have access to, with symbolic links at the ends of these structures pointing to where I've cloned the actual repositories.
Here's an example of what I started out with:
; find ~/gh -type l
/Users/dja/gh/github.tools.sap/developer-relations/advocates-team-general
/Users/dja/gh/github.com/SAP-samples/teched2020-developer-keynote
/Users/dja/gh/github.com/qmacro-org/auto-tweeter
and what I wanted to end up with (you can see the invocation of the script here too):
; find ~/gh -type l | awk -F/ -vCOLS=5,6,7 -f ~/.dotfiles/scripts/cols.awk
github.tools.sap developer-relations advocates-team-general
github.com SAP-samples teched2020-developer-keynote
github.com qmacro-org auto-tweeter
In other words, I wanted to select columns from the output and have them printed neatly and aligned. Don't ask me why, I guess it's just some form of OCD.
Anyway, I decided to write this in AWK, partly because I don't know AWK that well, but mostly as a meditation on the early days of Unix and a homage to Brian Kernighan. Talking of homages, I've also decided to share this script by describing it line by line, in homage to Randal L Schwartz, that maverick hero that I learned a great deal from in the Perl world.
Randal wrote columns for magazines, each time listing and describing a Perl script he'd written, line by line. I learned so much from Randal and enjoyed the format, so I thought I'd reproduce it here.
Let's start with the script, in full, courtesy of GitHub's embeddable Gist mechanism, which, incidentally, I created from the command line using GitHub's CLI gh
, like this:
; gh gist create --public scripts/cols.awk
I subsequently edited it too (there are now multiple revisions) with:
; gh gist edit c84f5a17dc4740dc2defa6a913cd3c2c
OK, so here's the entire script.
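Here's a sketch of what such a script might look like, reconstructed from the line-by-line description that follows - treat it as an approximation, as the original gist's exact line numbers won't match this version:

```shell
# A reconstruction sketch of cols.awk, written out from the description
# in this post; /tmp/cols.awk is just a convenient place to put it.
cat > /tmp/cols.awk <<'EOF'
BEGIN { if (GAP == "") GAP = 1 }                   # optional gap, default 1
NR == 1 {
  if (COLS != "") n = split(COLS, fieldlist, ",")  # e.g. -vCOLS=5,6,7
  else { n = NF; for (i = 1; i <= NF; i++) fieldlist[i] = i }
}
{
  records[NR] = $0                                 # keep for the second pass
  for (i = 1; i <= NF; i++)
    if (length($i) > fieldlengths[i]) fieldlengths[i] = length($i)
}
END {
  for (r = 1; r <= NR; r++) {
    split(records[r], fields, FS)
    for (i = 1; i <= n; i++) {
      f = fieldlist[i]
      printf "%-*s", fieldlengths[f] + GAP, fields[f]
    }
    print ""
  }
}
EOF
printf '%s\n' /Users/dja/gh/github.com/qmacro-org/auto-tweeter \
  | awk -F/ -vCOLS=5,6,7 -f /tmp/cols.awk
```

Fed a single symlink path like the ones earlier, this prints the selected columns (github.com, qmacro-org, auto-tweeter) padded to their longest widths.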
Remember that AWK scripts are generally data driven, in that you describe patterns and then what to do when those patterns are matched. This is described nicely in the Getting Started with awk
section of the GNU AWK manual. The approach is <pattern> <action>, where the actions are within a {...}
block. In this script, there are two special (and common) patterns used: BEGIN
and END
, i.e. before and after all lines have been processed. There's an <action> block in the middle which has no pattern; that means it's called for each and every line in the input. There's also an <action> block with a specific pattern, which we'll look at shortly.
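As a tiny, made-up illustration of that <pattern> <action> structure (separate from cols.awk itself):

```shell
# A miniature pattern-action demo: BEGIN, a pattern-less action that
# runs for every input line, a specific pattern (NR == 2), and END.
printf 'a\nbb\nccc\n' | awk '
  BEGIN   { print "start" }
  { total += length($0) }          # no pattern: runs for every line
  NR == 2 { print "second:", $0 }  # only for the second record
  END     { print "total chars:", total }'
```

This prints start, then second: bb, then total chars: 6.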
The invocation
Note the invocation earlier looks like this:
awk -F/ -vCOLS=5,6,7 -f ~/.dotfiles/scripts/cols.awk
Here are what the options do:
-F/
says that the input field separator is the /
character-vCOLS=5,6,7
sets the value 5,6,7
for the variable COLS
-f <script>
tells AWK where to find the scriptOK, let's start digging in.
The BEGIN
pattern
Lines 7-9 just make sure that the optional GAP
variable, if not explicitly set (using a -v
option in the invocation) is set to 1. That's how many spaces we want between each column. If we had wanted a value other than the default here, an extra option like this would be required, for example -vGAP=2
.
The NR == 1
pattern
The action in this block is executed only on one occasion - when the value of NR
is 1
.
NR
is a special AWK variable that represents the record number, i.e. the value is 1
for the first record, 2
for the second, and so on. Note that there's also FNR
(file record number) which comes in handy when you're processing multiple input files. So the <action> block related to this NR == 1
pattern is only executed once, when processing the first record in the input.
This <action> block, specifically lines 18-24, deal with the value for the COLS
variable. If it's been set (as in our invocation: -vCOLS=5,6,7
) it splits out the column numbers (5,6 and 7 here) into an array fieldlist
. If it's not been set, then the default should be all columns, which are put into the fieldlist
array using the loop in lines 21-23. Note that NF
is another special variable, the value of which tells us the number of fields in the current record.
The default pattern
Lines 31-36 represent the action for the default pattern, i.e. this is executed for each line in the input. That includes even the first record, although we've done some processing for the first record in the <action> block for the NR == 1
pattern already. That's because all patterns are tested, in sequence, unless an action invokes an explicit next
to skip to the next input record (see update #2 at the end of this post for the attribution for this info).
The script has to work out what the longest word in each column is, and for that it needs to read through the entire input. I think perhaps there may be better ways of doing this, but here's what I did.
Because this script needs two passes over the input, we store the current record in an array called records
in line 32. Worthy of note here is that each field in a record is represented by its positional variable i.e. $1
, $2
, and so on, and $0
represents the entire record. In lines 33-35 we build up an array fieldlengths
of the longest field by position. Arguably we only really need to remember the longest lengths of the fields in fieldlist
, but hey.
The END pattern
Lines 40-49 represent the action for the special END
pattern, i.e. once the records have been processed (once). At this stage we have the longest lengths for each of the fields (columns), and now we just need to go through the input again, which we have in the records
array.
In line 42 we use the split
function to split out the record we're processing into an array called fields
:
split(records[record], fields, FS)
The third argument supplied to this call is FS
, which is another special variable representing the field separator for this execution. Remember the -F/
option in the invocation, shown earlier? In this case, the value of FS
is also therefore /
. If the field separator is different (the default is whitespace) then the value of FS
will be different too.
Then in lines 43-46 we start printing out each chosen field (remember, the chosen ones are in fieldlist
). The printf
call in line 45 is special, let's break that down here:
printf "%*-s", fieldlengths[f] + GAP, fields[f]
Like other flavours of printf
, this one also takes a pattern and one or more variables to substitute into that pattern. The pattern here is for a single variable, and is %*-s
. This means that the variable to print is a string (basic form is %s
), which should be padded out, left justified (-
) by a value also to be supplied as a variable (*
).
So we need to supply two variables, the width to which the variable value should be padded, and the variable itself. And that's what is supplied. First, we have fieldlengths[f] + GAP
, which works out to be the longest length for that field (column), plus zero or more spaces as defined in GAP
. Then we have the variable that we want printed, i.e. fields[f]
.
Noting that printf
won't print a newline unless it's explicitly given (as \n
), this works well because then the consecutive fields are printed on the same line. Line 47 takes care of printing a newline when all the fields are output for that record.
And that's it. As the tagline for this blog says, I reserve the right to be wrong. I'm not a proficient AWK scripter, but this works for me.
Happy scripting!
Update #1, later the same day: Over on Lobsters, the user gioele contributed a pipeline version, which also helps me in a different area (small pieces loosely joined) of the same Unix meditation: find ~/gh -type | cut -d/ -f5,6,7 | column -s/ -t
. Thanks gioele!
Update #2, even later the same day: Over on Reddit, the user oh5nxo puts me right; in an earlier version of this script (and this blog post) I'd put the lines of code that are now in the NR == 1
<action> block inside the main (default) <action> block, as I'd mistakenly thought that I'd have to otherwise repeat some code. That wasn't the case. Thanks for sharing your knowledge, oh5nxo! I've updated the script and this post to reflect that.
I was browsing the source code of the main script in the bash-http-monitoring project that had been shared on a social news site recently. The general idea was that it fired off a number of background web requests to run in parallel and eventually produce a report on the availability of various websites. Nice, neat and simple.
In the main part of the project's srvmon
script, I saw this:
# Do the checks parallel
for key in "${!urls[@]}"
do
value=${urls[$key]}
if [[ "$(jobs | wc -l)" -ge ${maxConcurrentCurls} ]] ; then # run 12 curl commands at max parallel
wait -n
fi
doRequest "$key" "$value" &
done
wait
I noticed the use of wait
in those two places and was intrigued; although I could guess what it did, I wanted to learn more. On digging in a little, and reflecting on it, it struck me that wait
helps me understand better the origins of shell scripting and why it seems to be often misunderstood.
The wait builtin in action
First, what is wait
? Well, it's (usually a) builtin, i.e. a command that is built in to the shell executable itself, rather than existing as a separate program. The headline description is that wait
"waits for job completion and returns the exit status". The Wikipedia entry for it notes that it's a builtin because it "needs to be aware of the job table of the current shell execution environment", which makes sense, given its purpose.
While the above snippet of code gives a couple of examples, I thought I'd spend a coffee writing a little exploratory script called jobwait
to feel how wait
can work. Here it is:
#!/usr/bin/env bash
log() {
echo "$(date +%H:%M:%S) $*"
}
createjob() {
local time=$1
local message=$2
(sleep "$time" && log "$message") &
log "created job '$message' (${time}s) PID=$!"
}
main() {
createjob 10 medium
createjob 15 long
createjob 5 short
log "jobs created"
wait -n && log "a job has finished"
wait && log "all jobs have finished"
}
main "$@"
Running this script produced the following output - note the times on each of the log records, which shows when each log record was issued:
; ./jobwait
09:03:11 created job 'medium' (10s) PID=72679
09:03:11 created job 'long' (15s) PID=72682
09:03:11 created job 'short' (5s) PID=72685
09:03:11 jobs created
09:03:16 short
09:03:16 a job has finished
09:03:21 medium
09:03:26 long
09:03:26 all jobs have finished
;
Now there's nothing unexpected about this; nevertheless, it was quite satisfying seeing things happen in the order that they did. Note that wait
returns the job exit status too, and with the use of &&
I'm ignoring that here at my peril, but it's only a test script.
The -n
option makes wait
wait for the next job to terminate, whatever that job is. So here we see that the "a job has finished" log entry is issued as soon as one of the jobs terminates - the 'short' one, in this case.
The shell as a command environment
Now we know what wait
can do, I'd like to think a little bit about what it represents, too.
Recently my learning radar has been picking up various conversations where it seemed to me that people were misunderstanding what shell scripting is. It also came up this month in a Lobster thread, where the user "pm" really helped me put my finger on what is frustrating about the "Bash vs a real programming language" discussion.
The shell is like a REPL to your operating system, an interactive environment where you can have a conversation with it - manage resources, execute programs and so on. In that sense, the language of that conversation needs to be simple and have minimal noise. You want to just type something in and have it happen.
Moreover, you want to specify values with as little fuss as possible. Run a program that operates on a word, or a list of words, or a file or list of files - you don't want to be messing around with having to quote those things in the basic case. And the facilities that the REPL provides to enable you to take full advantage of the resources and programs you're working with, are super important. I'm thinking of the Unix pipeline, and IO redirection as two great examples of that.
That reference to Unix reminds me of a wonderful paper written in 1976 by one of Unix's fathers, Ken Thompson. It's THE UNIX COMMAND LANGUAGE which is available via the Internet Archive but has also been made more consumable in different formats in this lovely repository too. This paper is purportedly the first ever written about the Unix shell, and is a great read. It has a beautifully simple introduction to subshells, pipelines and IO redirection too.
Perhaps more subtly, what we know as the source for shell scripting today is referred to in the paper's title as a "command language", and that's what it is. There is much in the paper that is quoteworthy, but I'll pick just one here that helps me think about what the shell (and, by implication, its language) is:
"The Shell, and the commands it executes, form an expression language ... [which is] easily extensible"
So this REPL, our interface to the operating system and its resources, is a command environment and our direct interaction with it is via a command language that has been designed to express our intentions in as straightforward and as consistent a way as possible.
Here's another quote, from the section "THE SHELL AS A COMMAND":
"The Shell is just another command and by redirecting its standard input, it is possible to execute commands from files."
A natural progression to scripting
So it's at this point in this thinking journey that we start to transition from a REPL, where the interaction is direct ... to a collection of commands that can be saved in a file and passed to the shell, which I guess one could see as indirect interaction.
This of course is a move to scripting, as intentional collections of command language elements. And this is where wait
makes a lot of sense; perhaps it would be used interactively, but it seems more useful to me as a way of getting things to pause while other things complete, when in indirect mode ... in unattended command language execution mode. Scripting.
The transition from using the command language directly (including the syntax that allows us to join programs together in pipelines and manage input and output) to scripting, is in this way very subtle, and feels to me like a natural conclusion. And the features that make the command environment and its language so useful in the context of direct interaction in the REPL, are exactly those features that are available for scripting too.
To me, this is the essence of shell scripting, and explains why it is how it is. While it makes sense to write individual programs in whatever language one finds suitable -- while of course making sure those programs behave in predictable and useful ways in the context of the command environment, especially in relation to STDIN, STDOUT and STDERR -- it makes absolutely no sense to me whatsoever to suggest that shell scripting itself should be replaced by "a modern language" (whatever that means).
To echo a (deliberately preposterous) concept mentioned in the Lobsters thread earlier, try replacing your shell with a "modern language" REPL such as Node.js's or Python's, and see how your productivity plummets. Try harnessing operating system resources, executing programs and filtering their output, or submitting background jobs (and wait
ing for them to complete before proceeding further) - and you'll soon come unstuck.
The shell is how it is for a reason. I'm happy with that.
I was browsing a Superuser question and answer this morning and the code in the accepted answer looked like this:
set -- value1 value2 "value with spaces"
for a; do
shift
for b; do
printf "%s - %s\n" "$a" "$b"
done
done
I was somewhat confused by the rather short for
loop constructions here, and ended up looking it up in the looping constructs section of the Bash manual.
What looked odd to me was that there is no in <values>
part to either of the for
loops. I am used to seeing (and writing) for var in x y z
or similar. So what were these loop constructions iterating over? Well, the Bash manual section says this (emphasis mine):
for name [ [in [words ...] ] ; ] do commands; done
Expand words (see Shell Expansions), and execute commands once for each member in the resultant list, with name bound to the current member. If 'in words' is not present, the for command executes the commands once for each positional parameter that is set, as if 'in "$@"' had been specified (see Special Parameters).
So these for
loops are processing the positional parameters in $1
, $2
and $3
which were set by the set
command on the first line, i.e. the values value1
, value2
, and value with spaces
respectively.
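A quick way to convince yourself of the equivalence, using throwaway values:

```shell
# "for a" with no "in" walks the positional parameters, exactly as if
# 'in "$@"' had been written.
set -- value1 value2 "value with spaces"
plain=$(for a; do printf '<%s>\n' "$a"; done)
explicit=$(for a in "$@"; do printf '<%s>\n' "$a"; done)
printf '%s\n' "$plain"
```

Both loops produce identical output, and the quoted "value with spaces" stays intact as a single member in each.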
So there you go - it's sort of obvious now I think about it - what else would the loop constructs be processing? Anyway - onwards and upwards!
At the end of Oct 2020 I ran a brief poll on Twitter, on which 82 people voted. Here's that poll, and the results. They're quite mixed, which at first might seem surprising. But there are reasons for that, as we'll find out.
Print working directory
The most popular option was "print working directory". At first sight it seems logical: "print out the current working directory, i.e. where I am right now". Moreover, the description in various versions of the manual for pwd
help to drive home that notion. Typically we see sentences like "print name of current/working directory" or "print the current directory".
But there are lots of commands that print stuff, and are described in that way too. Take the id
command. Here's what one man page says: "print real and effective user and group IDs". There's "print" again. But the command isn't pid
, it's id
. When you think about it, many, many commands in Unix send information to STDOUT, i.e. to the terminal. That's sort of the point of many of them.
This time arguably only superficially definitive, it would seem, the Wikipedia entry states, on the page for pwd
: "the pwd command (print working directory) writes the full pathname of the current working directory to the standard output". As if to underline the hopeful authority of this statement, there are five (!) footnotes that supposedly link to resources that back this up.
Unfortunately, the first footnote points to a Wayback Machine copy of the UNIX PROGRAMMERS MANUAL - Seventh Edition, Volume 1 - January, 1979, wherein there are actually zero references to pwd
being short for "print working directory":
I don't know about you, but this historic document carries more weight for me than other sources I've come across, and it only serves here to undermine the credibility of the Wikipedia entry.
The rest of the footnote links seem dubious at best, except for the one pointing to the GNU Coreutils manual on pwd which has it as "print working directory". But everything else I've seen so far makes me think that this is a misunderstanding that has spread for obvious and innocent reasons. In addition, the one footnote in the Wikipedia page that is not used to back this claim up is a pointer to The Open Group Base Specifications Issue 7, 2018 edition's information on pwd, which almost seems like it's actually avoiding using the word "print" at all: "return working directory name" ... "The pwd utility shall write to standard output an absolute pathname of the current working directory, which does not contain the filenames dot or dot-dot.". Very specific, very not-print.
So I'm thinking that "print working directory" isn't what pwd
stands for. In fact, "print working directory" may be common to some man pages, but on this macOS machine, with its BSD heritage, we have, instead: "pwd -- return working directory name". Moreover, it goes on to say "The pwd utility writes the absolute pathname of the current working directory to the standard output".
Pathname of working directory
So perhaps it really is "pathname of working directory". That would, at least to me, make more sense. Not only does it eschew the redundancy of "print", it also is more specific about the output - if I'm in /home/dja/
for example, then invoking pwd will tell me that, i.e. where I am, including the whole path, and not just dja
:
$ pwd
/home/dja
Process working directory
As for the other options, I do favour "process working directory", mostly because it makes a lot of sense to me; every process in Unix has the concept of a current working directory, and that's exactly what I'm asking for when I'm in my shell process and enter pwd
- there's a part in the video Unix terminals and shells that explains this very well.
I'd love to be able to point to some old Unix sources that definitively explain the answer, but unfortunately that search has come up with very little - the pwd
source in both the 5th and 6th Editions of Unix shed no light on this whatsoever.
Present working directory
What about "present working directory"? Well, that option seems to have legs, in the form of the Korn shell. While one source implies that the answer might well be "pathname of current working directory", in that pwd
just emits the value of the $PWD
environment variable (and a variable called "print working directory" makes no sense at all) ... it would seem that in ksh-land, at least, "present working directory" is what pwd
represents. Take, for example, the ksh man page which states "PWD - The present working directory set by the cd command".
There's a ton of discussion, both direct and indirect, on this very question. Take for example these two entries in the Unix & Linux Stack Exchange forum: Etymology of $PWD and What is $PWD? (vs current working directory). Of course, perhaps the definitive answer will never be found, as computing history is nothing if not varied and prone to forking.
Multics and print_wdir
Talking of history, we could go further back to pre-Unix roots, in the form of Multics, which indirectly gave rise to Unix (originally "Unics"). In the list of Multics Commands, we see, nestled amongst other similarly named commands, something that jumps out at us:
print_mail (pm) display mail in a mailbox
print_messages (pm) display interactive messages in a mailbox
print_motd (pmotd) display message of the day (source)
print_proc_auth (ppa) display process's sensitivity level and compartments
print_request_types (prt) display list of I/O daemon request types
print_search_paths (psp) display search paths
print_search_rules (psr) display ready messages
print_wdir (pwd) display working directory
There's pwd
, and in fact, just like its sibling pmotd
, for example, which is short for print_motd
, it's short for print_wdir
. Now, given the context of the original poll being set to Unix and Linux, perhaps we must discount this information. But as someone who is fascinated with Unix history in general - how can I?
I guess there are a few things to conclude. The history is rich and diverse, and maybe we'll never know for sure. Perhaps, in fact, the answer will depend on whom we ask. In the grand scheme of things, it doesn't really matter ... but to those who delight in minutiae, it's a fun topic worth exploring.
I went to school in the late 1970s and early 1980s - the dawn of computing for everyone. My very first experience of computing was at a terminal connected to a timesharing minicomputer, rather than at the keyboard of one of the personal computers of the day.
There was an article in the 1979 edition of my school's magazine "The Hulmeian", written by our head of Mathematics Morris Loveland. It brings back many happy memories, and provides some insight into computing in the early days.
Below the article, I've included some pictures accompanied by brief descriptions.
COMPUTER UNIT
In 1974 the School purchased a single computer terminal, a TEXAS 733, and established the G.P.O. dial-up link to Salford University. This project, initially between School and the University, made available on-line computer time to us and later to other educational establishments. It proved to be a most useful and successful facility and reports have been given in this magazine of some of the work undertaken in the five years in which the link was used. The School had between ten and fifteen hours of on-line time each week, mainly during lunch hours and after school. Some time was available during teaching time and a large number of boys had experience of using the BASIC language to a large remote processor.
Computing was mainly organised for small groups or for individual users, although a certain amount of class teaching was undertaken. The limitations of a single terminal caused delays and frustration. Boys were forced to wait to use the system and it was found to be extremely difficult to teach a class of thirty boys where the visual display was a single sheet of typed material. Salford University extended the computer facility to several other schools resulting in a considerable reduction in the on-line time available to us.
Early in 1978 it was decided to investigate the possibility of installing a complete on-site computer system at School. The searches took nearly a year and in that time a system which would satisfy the requirements of the School was determined. The financial aspects were agreed in November 1978 and the system was delivered in January 1979.
The computer which has been installed is a SYSTIME 3000 comprising a PDP11/34 processor with 196Kb of working memory, two 4.8Mb disc drives for data storage, three visual display terminals (one of which is used for system control), a Superterm paper printer and the necessary hardware to include the original Texas terminal into the system. Thus four terminals instead of one are available for use with no restriction on the time when a boy may use the computer.
The language used is BASIC PLUS which is a variant of the BASIC language used during the past five years. Very few problems have been experienced with this minor change of language and it is a most suitable language for teaching purposes. BASIC PLUS is interactive, that is, one which enables a two way 'conversation' between user and machine. If an error is made by the user, either in typing or in the logic of what is communicated to the machine, he is informed of that error immediately and can make the required corrections.
The processor and system control is housed in the careers room and is linked by cables running across the quadrangle and over the roof of the Science block to room 34. This room has been redesigned and redecorated to be a terminal room, housing at present the four computer terminals.
The system is a fairly standard computer package apart from one important modification. The signal from one visual display unit is taken and fed to a television monitor. Normally a single terminal is used by an individual or at most by a small group working on a particular project. The intention of having the signal from one terminal displayed on a large television monitor was to enable full classes of thirty boys to see a particular piece of computing. However, one monitor proved insufficient and by including a signal converter and amplifier the signal from one terminal can now be displayed on three domestic television sets. When a full class is taken into the terminal room teaching can be given to all by linking all four terminals together and displaying the data on them and on the three television sets. As far as is known this particular part of the system is an innovation as regards the teaching of computing in schools, particularly as part of the electronics required to convert the signals to be compatible with domestic television sets was designed and built in School.
The system is thus being used in two different ways: for individual and small group activity or with the system linked together for class teaching. So far no examination teaching has been undertaken and at present none is envisaged. The intention is to use the computer in the classroom as a tool to teach a computer language, which will enable boys to undertake projects on their own, and as an aid to enrich and extend the normal teaching of Mathematics. Boys will find that they are taken to the terminal room perhaps for a complete period or for only ten minutes of a Mathematics period during which some particular part of the subject matter being developed will be illustrated using the computer.
The system has been planned with future expansion in mind. When the wiring was installed two extra cables were taken into room 34; and therefore two more terminals can be added to the system fairly easily as and when they are required. Further expansion is possible; up to twenty-four terminals can be serviced by the processor! To achieve this additional memory will have to be added to the processor.
Following the delivery of the system in January 1979 and after all testing had been carried out by the suppliers the computer was in limited use in early March. Since then the number of users has increased considerably. The computer is available for general use from 8.00 am to 5.00 pm and is heavily used by boys from the first to the sixth form before morning school, during the lunch hour and after school. Considerable use has been made during teaching time for class sessions, and sixth formers are able to use the computer in their private study time. During the final three weeks of the summer term about nine hundred log-ins were recorded! As this period included the preparation for the Open Days when the system was out of general use, there appears to be a growing demand for the facility the computer now provides.
During the summer term three after-school courses were provided to teach the BASIC language, two for juniors new to the system and one for those with considerable experience in computing. It is hoped that more of these courses will be provided for boys at all levels in the School in the coming years.
Looking back over the period of the installation of the computer and the enthusiasm it has generated with boys of all ages, I anticipate a growing demand for computer time and an enhancement of the teaching of Mathematics in the School.
M.L.
A photo that accompanied the article in the school magazine
Here's a grainy photo that accompanied the article in the school magazine. You can see one of the "VDU" (Visual Display Unit) terminals, and you can see a better picture of one of these in the photo of the Systime unit at the end of this post, but can you also spot the Superterm paper-based terminal (back right)?
A Superterm Data Communications Terminal
Moreover, furthest away from the camera, there's the original Texas 733 terminal, also paper-based. Here's a better picture of one, from the original brochure.
A Texas 733 terminal
Here's a picture of what the computer unit looked like - it's a photo (courtesy of Computing at Chilton, thank you) of a Systime unit, and the terminal on the desk is the same as those that we had in the computer room.
A Systime unit
I was staring absentmindedly at a helper script that I'd written on last Friday's #HandsOnSAPDev live stream (replay here: Generating Enterprise Messaging artifact graphs with shell scripting and Graphviz - Part 1) which looks like this:
#!/usr/bin/env bash
declare fontname="Victor Mono"
declare fontsize="16"
dot \
-Tpng \
-Nfontname="$fontname" \
-Nfontsize="$fontsize" \
test.dot > test.png
(yes, I know we need to talk about the hashbang, but let's leave that for another time).
I like to make my scripts readable, so I often split commands over separate lines, as I've done here in the invocation of the dot
command, with its various switches and arguments.
I had always vaguely thought that "OK, well if I want to continue on the next line, I have to put a backslash (\
) at the end of the preceding line", like I've done here. So in that sense, I was considering the backslash as a sort of continuation character.
Bzzt. Wrong. Or at least not entirely accurate.
Yesterday the YouTube algorithm, which knows I'm currently geeking out on all things shell, suggested a short series of videos by Brian Will: Unix Terminals and Shells. In the second video in this playlist, he listed the characters in Bash that had special meaning:
# ' " \ $ ` * ~ ? < > ( ) ! | & ; space newline
Some may be more obvious or familiar than others. Note the last two in the list - space
and newline
. In particular, newline
normally denotes the end of a command (of course, there are other special characters that denote this too - notably ;
and &
). See Section 3.2.3 Lists of Commands in the Bash manual for details.
Brian goes on to explain the function of the backslash character in a quoting or escape capacity - to remove the special meaning of the immediately following character, so that it's treated as-is. So for example if you actually wanted a dollar sign, which normally has a special meaning if you use it, you'd use \$
.
Likewise, then for the newline character. If you want the meaning of newline to be cancelled, i.e. "please do not treat this point as the end of the command", then you need to use the backslash to quote it:
\newline
(of course, I'm representing an actual newline with newline
here).
I don't know about you, but I've always used "escape" as a verb here, rather than "quote", i.e. "to remove the special meaning of character x, you have to escape it with a \
". I think a more accurate way of saying it is "... you have to quote it with a \
". The cause is probably the fact that \
is known as the "escape character". The escape character is documented in Section 3.1.2.1 Escape Character of the Bash manual, which is quite short, and worth quoting, thus:
"A non-quoted backslash '\' is the Bash escape character. It preserves the literal value of the next character that follows, with the exception of newline. If a \newline pair appears, and the backslash itself is not quoted, the \newline is treated as a line continuation (that is, it is removed from the input stream and effectively ignored)".
This rounds out this post neatly for me, in that in fact, yes, the special meaning of newline
is removed if you quote it with a backslash, so you can continue on the next line; what's more, it's actually removed from the input stream. I mean, it's just whitespace anyway, but that's a curious and interesting detail.
Anyway, I'm now enlightened - I thought the backslash was doing something "special" here in the context of the script above (and many others like it), but no, in fact, it's just doing its normal job of removing the special meaning of the immediately following character, which to us is invisible.
Spending a pleasant coffee on my day off today I looked at tackling another challenge in Exercism's bash track - Acronym. The requirement included ensuring that any generated acronym (I guess these might actually be initialisms, but that's a discussion for another time) was completely in uppercase, regardless of the source.
In my solution, I resorted to the usual use of tr
, like this:
tr '[:lower:]' '[:upper:]'
All good. I like to peruse others' solutions, to learn from how they might have tackled the same challenge, and I came across something that looked rather odd at first, as I'd never seen it before. It was a solution by TopKech and looks like this:
OUTPUT=$(echo "$1" | sed -e 's/$/ /' -e 's/\([^ \-]\)[^ \-]*[ \-]/\1/g' -e 's/^ *//')
echo ${OUTPUT^^}
What's that ^^
in the second line?
Turns out that it's a case modification operator, within a parameter substitution context. What's more, it is "relatively" new, in that it was introduced with version 4 of Bash. I say "relatively", as version 4 was introduced way back in 2009; but having been a macOS user for a while, I'd been stuck with version 3 due to Apple's issues with the GPL v3 licence (prompting them not to ship any version beyond 3, and even go so far as to make zsh
the default shell on newer versions of the OS).
Version 4 of Bash came with lots of wonderful stuff, including 4 separate case modification operators, that are illustrated in the Advanced Bash Scripting Guide - Chapter 37. Bash, versions 2, 3, and 4 and can be summarised thus:
|Operator|Effect|
|-|-|
|${var^}|Make first char of var value uppercase|
|${var^^}|Make all chars of var value uppercase|
|${var,}|Make first char of var value lowercase|
|${var,,}|Make all chars of var value lowercase|
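A quick feel for all four operators, assuming Bash 4 or later:

```shell
# Case modification via parameter expansion (Bash 4+):
str="gattaca"
echo "${str^}"     # Gattaca
echo "${str^^}"    # GATTACA
STR="GATTACA"
echo "${STR,}"     # gATTACA
echo "${STR,,}"    # gattaca
```

Note that the operators expand to the modified value; the variable itself is left unchanged.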
I didn't know that, but I do now!
]]>I've just set up Exercism on this machine so I could download challenges in the Bash track and try to improve my Bash scripting fu. I spent a pleasant hour getting to know bats - the Bash Automated Testing System, which Exercism uses for the Bash track, and looking at one of the easy challenges on Hamming in relation to DNA sequences. My solution, in case you're interested, is here.
In implementing the solution, I had to compare DNA sequences and determine how many differences between them there were - a count of where letters differed in the same positions. For example, while there are no differences in the pair of sequences GATTACA and GATTACA, there are two differences in the pair GATTACA and GCTTAGA. As much by luck as anything else, I stumbled upon this construct:
${parameter:offset:length}
I'd seen this construct before and it rang a bell, I remember thinking it was like the MID$
function in many implementations of the BASIC language I'd used in my early days. Basically it's a way of reaching into a string and pulling out a section of it.
So for example, if we have str="Hello, World!"
then we can use this construct like this (note that all these variations are possible):
> echo ${str}
Hello, World!
> echo ${str:4}
o, World!
> echo ${str:4:5}
o, Wo
Shell parameter expansion
There's plenty more information on this in the Bash man page; perhaps most importantly we learn there what this is called - in what category this construct lives. It's in the parameter expansion family.
We've seen parameter expansion before in this blog, specifically in Shell parameter expansion with :+ is useful, which looks at the :-
and :+
variants. But even if we combine this post with that one, we're only scratching the surface of what's possible; I'm looking forward to grabbing a cup of tea and reading through the entire section of the Bash man page on this topic soon.
Using expr
What's potentially confusing here is that there's more than one way to extract a portion of a string. Not only do we have this shell parameter expansion construct, but we have the executable expr
, which evaluates expressions. There are many expressions that can be evaluated (see the man page), one of which is substr
. This explains why, in the Manipulating Strings section of the Linux Documentation project, both approaches are documented.
So using expr
, the equivalent of the above example where the value "o, Wo" is pulled out of "Hello, World!" is this:
> expr substr "$str" 5 5
o, Wo
There are two things to note here with this expr
evaluation of substr
, given that the man page describes it as substr STRING POS LEN
:
the first number is not the offset but the position, which is why here we need the first value to be 5 (position in the string) whereas with the parameter expansion we needed the value 4 (offset from the start).
both numbers (POS and LEN) are required; if your extraction needs to be "from here to the end of the line" you'll either have to work out the length yourself, or use another expression with expr
, for example match
.
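To make the position-versus-offset difference concrete, here's a small sketch using the same example string; the match expression at the end is just one way (my assumption, using expr's anchored regex match with a capture group) to get "from here to the end of the line" without computing a length:

```shell
#!/usr/bin/env bash
str="Hello, World!"
# expr substr takes a 1-based position, so 5 here vs offset 4 in ${str:4:5}
mid=$(expr substr "$str" 5 5)
# expr's match operator (:) can capture "the rest" with a BRE group
rest=$(expr "$str" : '....\(.*\)')
echo "$mid"   # o, Wo
echo "$rest"  # o, World!
```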
In context
Right now I'm wondering about the history of both approaches, and what we should be using, but that question is for another day. In the meantime, in case you want to see the use of this parameter expansion approach in my solution, here's the relevant section (noting that the first and second DNA sequence strings are in $1
and $2
respectively):
# Count the differences
declare diffcount=0
for i in $(seq 0 $(( ${#1} - 1 ))); do
[[ ! "${1:$i:1}" = "${2:$i:1}" ]] && (( diffcount++ ))
done
echo "$diffcount"
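The same technique can be wrapped up as a standalone function (a hedged restatement of the section above; the name hamming is my own choice):

```shell
#!/usr/bin/env bash
# hamming: count positions at which two equal-length strings differ
hamming() {
  local count=0 i
  for (( i = 0; i < ${#1}; i++ )); do
    [[ "${1:$i:1}" != "${2:$i:1}" ]] && (( count += 1 ))
  done
  echo "$count"
}

hamming GATTACA GATTACA  # 0
hamming GATTACA GCTTAGA  # 2
```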
There's plenty to learn in this area, but right now it's time for me to make that cuppa.
]]>After working my way through the small ix
script in Mr Rob's dotfiles, writing three posts Using exec to jump, curl and multipart/form-data and Checking a command is available before use along the way, I've now turned my attention to the twitch
script which he uses during his live streams. I haven't gone very far when I light upon this section:
declare gold=$'\033[38;2;184;138;0m'
declare red=$'\033[38;2;255;0;0m'
...
So declare
is a keyword that I've seen before but never fully understood or embraced. Seems like this is a good time to fix that.
Declare is a builtin
To start off, declare
is a builtin, which means that rather than an external executable (such as echo
, or even [
), it's part of the Bash runtime itself, as we can see thus:
> strings $(which bash) | grep declare
declare -%s %s=%s
declare
declare [-aAfFgilnrtux] [-p] [name[=value] ...]
When used in a function, `declare' makes NAMEs local, as with the `local'
A synonym for `declare'. See `help declare'.
be any option accepted by `declare'.
declare -%s
(If you're curious about [
being an external executable, you might be interested in another post: The open square bracket [ is an executable.)
The typeset
synonym
First off, let's deal with the declare
vs typeset
question. Basically, typeset
does in the Korn shell (ksh) pretty much what declare
does in the Bash shell. And typeset
has been added to Bash as a synonym for declare
, to make it easier for developers to switch between the flavours. There are other synonyms relating to declare
, but we'll come to those in a bit.
Basics of declare
Next, let's deal with the question: "But why is declare used here at all?". Well, in this particular case it's not absolutely necessary. Strings and indexed arrays don't actually need to be declared (associative arrays are the exception, as they require declare -A), so this would be fine, too:
gold=$'\033[38;2;184;138;0m'
red=$'\033[38;2;255;0;0m'
...
This would be a couple of simple assignments of values to (otherwise) previously undeclared variables. On the other hand, with the declare
variant, subtly different, we're declaring a couple of variables and also making assignments at the same time, which declare
permits us to do.
The local
synonym
Of course, the main point of declare
is to declare variables and state certain attributes that they are to have. We haven't seen an example of that yet, but before we do, there's another subtle difference between declare var=value
and simply var=value
. This is briefly covered in a paragraph of the help information (run help declare
in a Bash shell):
When used in a function, declare
makes NAMEs local, as with the local
command. The -g
option suppresses this behavior.
So local
is our next synonym for declare
, in the context of a function definition. An example script foo
will help:
func1() {
local var1
var1="Apple"
echo "func1: $var1"
}
func2() {
declare var1
var1="Banana"
echo "func2: $var1"
}
func3() {
var1="Carrot"
echo "func3: $var1"
}
var1="Main"
echo "var1 is $var1"
func1
echo "var1 is $var1"
func2
echo "var1 is $var1"
func3
echo "var1 is $var1"
Let's look at what we get when this script is executed:
> bash ./foo
var1 is Main
func1: Apple
var1 is Main
func2: Banana
var1 is Main
func3: Carrot
var1 is Carrot
The thing to spot here is that because neither local
nor declare
were used for var1
in the definition of the func3
function, the assignment of the value Carrot
was not restricted to the scope of that function, and when back in the main part of the script, the value of var1
has the value that it was assigned within func3
, i.e. Carrot
, not Main
any more.
Options for declare
Of course, given the main purpose of declare
, it's worth briefly looking at more specific uses. There are various options, adequately covered by various sources including Advanced Bash-Scripting Guide - Chapter 9: Another Look at Variables, and so only summarised here:
Option | Meaning |
---|---|
-r | Read-only |
-i | Integer |
-a | Array |
-A | Associative array (i.e. a dictionary, or object) |
-f | Function |
-x | Exported |
-g | Global |
There are other options, but these are the most common, at least as far as I've found in my research. Others are covered in the help declare
output.
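Here's a hedged sketch of a few of these attributes in action (the variable names are invented purely for illustration):

```shell
#!/usr/bin/env bash
declare -i num=10        # integer: assignments are evaluated arithmetically
num="num + 5"            # num is now 15, not the literal string "num + 5"
declare -A fruit=( [a]=apple [b]=banana )  # associative array needs -A
declare -r fixed="constant"  # read-only; any later reassignment is an error
echo "$num ${fruit[b]} $fixed"  # 15 banana constant
```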
The readonly
synonym
The -r
option for declare
has a sort of synonym too, which is readonly
. However, there's a difference relating to scope; while declare -r
will use function-local scope (similar to how it was used in func1
and func2
earlier), readonly
will not respect that and simply use the global scope, even inside functions.
In other words, if you were to add another function definition to the above foo
script example, like this:
func4() {
readonly var1
var1="Damson"
echo "func4: $var1"
}
... then when func4
had been executed, the value of var1
in the main section of the script would then also be Damson
.
Using declare -r
here instead is the safer approach, in that the local function scope is respected. Note however that if we add -g
(denoting the "global" attribute) to this, i.e. use declare -r -g
or declare -rg
, then the effect would be the same as using readonly
.
The export
synonym and what -x
implies
There's one final synonym I found in this journey of discovery, and that's export
, which is the equivalent of declare -x
. It took me a few minutes to properly think about what this "available for export" attribute actually implies.
Like me, you've most probably used export
in your .bashrc
file, to set "global" variables when your shell session starts, to be available to you in that session and in executables that you invoke there. Usually these variable names will be in upper case by convention, denoting variables that are "environment" wide. In your shell, you can use env
to see what these are. Note that the list that env
produces includes variables automatically available to you in the shell too, such as HOME
and PATH
.
So what does declare -x
imply, in the context of a script that you might write and then invoke? It does not mean that once the script finishes, that variable will be available to you in the shell. As an example, consider this script bar
:
declare -x var2="Raining"
echo "var2 is $var2"
When we run this, look at what we get:
> echo "$var2" # nothing up my sleeve
> ./bar
var2 is Raining
> echo "$var2"
>
But what if we also had another script baz
:
echo "In baz: var2 is $var2"
and we invoked it from within the bar
script:
declare -x var2="Raining"
echo "var2 is $var2"
./baz
You can guess what the output will be:
> ./bar
var2 is Raining
In baz: var2 is Raining
And on returning to the shell prompt, we can double check that var2
doesn't have a value:
> echo "$var2"
>
The way I think about this in my mind is like a tree structure:
shell
|
+-- bar <- var2 declared with -x as 'exported'
|
+-- baz <- var2 available here too
Exporting descends, rather than ascends, so there's no way var2
could ever be made available like this in the shell
.
That's about it, I think. I hope this is useful; I have found it helpful to try and explain these concepts to you, as it helps me learn. In researching, I came across some content in Stack Overflow and Stack Exchange - so thanks to those folks who took the time to explain things there. You may want to reference them too:
]]>set -o errexit
at the start of my scripts to make them more robust.
There comes a time when you move from just hacking lines of shell script together into a file, to recognising that the file is now a script and that you want that script to run well, so you give it a little bit of help.
In a similar way to the -w
flag for Perl scripts, or even perhaps the strict mode turned on in JavaScript files with 'use strict'
, there are flags that you can use for Bash scripts. A few weeks ago I read Writing Robust Bash Shell Scripts by David Pashley, and it taught me about a couple of flags:
Short form | Long form | Description |
---|---|---|
set -e | set -o errexit | exit when a command fails |
set -u | set -o nounset | exit when an undeclared variable is used |
There are short and long forms of these flags, as you can see. I would use the short forms on the command line, but prefer the long forms in scripts, because they're more readable (although the language nerd in me sees 'noun set' before 'no unset' in the latter). The Google Shell Style Guide, to which I referred in Improving my shell scripting recently, also has something useful on using flags, in the Which Shell to Use section. It says that flags on the hashbang line (#!/bin/bash
) should be used sparingly - in other words, they should be set with set
on their own lines. The reason it gives, which makes sense, is that the script can then be run in the same way like this: bash scriptname
(the hashbang is redundant in this case, along with any flags set on that line).
I homed in on set -o errexit
as it seems to be a recommended standard and makes a lot of sense (although interestingly the Google Shell Style Guide makes no mention of it). This flag causes the script to be terminated if any statement returns a non-true value. As David put it in his article, this "prevents errors snowballing into serious issues when they could have been caught earlier".
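A minimal sketch of the effect (run in a child bash, so the failing command doesn't take the current shell down with it):

```shell
#!/usr/bin/env bash
# With errexit set, the script stops at the first failing command,
# so "unreachable" is never printed
output=$(bash -c 'set -o errexit; echo start; false; echo unreachable' || true)
echo "$output"  # start
```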
As I was looking for further information on set -o errexit
I came across another useful article Best Practices for Writing Bash Scripts by Kev van Zonneveld - definitely worth a read, especially for other flags that are available (xtrace
and pipefail
).
So, I'm putting set -o errexit
as one of the first lines in my Bash shell scripts. I notice that Mr Rob does the same (see his twitch
script as an example). You should, too.
shellcheck
and shfmt
tools to help me improve the quality and consistency of my shell scripts.
I'm doubling down on shell scripting, in particular Bash shell scripting. This is for many reasons, not least because I think that in the age of cloud and containers, shell environments are more important than ever. And what better shell than the Unix style shell; the design dates back decades but is still in my eyes one of the most wondrous things in tech even today, with its beautiful simplicity and its simple beauty.
Style Guide
While watching a live stream replay by Mr Rob, specifically Google Shell Scripting Guide, Yes, Yes, 1000 Times Yes!, I came across the Google Shell Style Guide and it's succinct enough to digest in a single sitting, and well written enough to comprehend in that time, too.
I've decided to use this style guide as a general reference for my scripts and plan to implement changes to some of my existing scripts over time.
Shellcheck
I discovered the shellcheck
shell script analysis tool recently and my goodness me has it made a significant impact on not only the quality of what I write, but also on my understanding of Bash shell syntax! It's available as an online tool, but far more importantly as a command line tool that will highlight issues with your shell code. A linter, basically.
Moreover, it has a rich set of reference material in the wiki, including definitive pages for each of the errors it will emit. Here's an example: SC1019 is the error code for "Expected this to be an argument to the unary condition" and there's a reference page for it here: SC1019.
I use Vim as my primary editing environment and use the Asynchronous Linting Engine (ALE) as a key plugin. This means, that without me lifting a finger, shellcheck
will be used asynchronously, live while I'm editing, to show me issues.
If you're writing shell scripts, get shellcheck
installed and wired up to your editor now.
shfmt
My son Joseph used to write a lot of Go, and I was fascinated by the philosophy of what the gofmt
formatting tool represented. Go programmers all expected code to be formatted the same way via this tool, and it's natural for them to have their code (re)formatted when they save it in the editor. I know that this is anathema to some programmers, which is why it caught my eye.
There are formatters for other languages that work this way now (and I'm sure there were before, too) such as rustfmt
(used by Mr Rob, which is what gave me the idea) and there's a version for shell scripts called shfmt
, described as "a shell parser, formatter and interpreter".
Having experimented with the shfmt
options, I ended up choosing a few that would help me stay close to the style guide:
Option | Meaning |
---|---|
-i 2 | indent with two spaces |
-bn | binary ops like && and \| may start a line |
-ci | switch cases will be indented |
-sr | redirect operators will be followed by a space |
I added some new configuration to tell Vim to use this shfmt
tool with these options, to automatically format any shell source on save. This means that I can get my script content automatically formatted without thinking about it, in the same way Go programmers enjoy.
This is what that configuration addition looks like right now:
fun! s:FormatBashScripts()
if getline(1) =~# '^#!.*bash' && executable('shfmt')
%!shfmt -i 2 -bn -ci -sr -
endif
endfun
autocmd BufWritePre * call s:FormatBashScripts()
The reference to getline
is to check that the shebang denotes a Bash shell script and the reference to executable
prevents errors occurring if I'm on a machine where shfmt
is not available. The key part is this: %!shfmt ...
which passes the entire buffer contents through the invocation of shfmt
as if it were a filter, replacing the contents with whatever shfmt
outputs.
I guess it almost goes without saying that the significance of how this works -- using shfmt
as a filter to pass the content through, via STDIN and STDOUT, following one of the key Unix shell philosophies -- is not lost on me.
And remember folks, #TheFutureIsTerminal!
]]>ix
script, I discovered something about curl
that I hadn't known about.
The script's key line is this:
url=$(curl -s -F 'f:1=<-' http://ix.io)
My gaze was immediately drawn to this bit: -F 'f:1=<-'
. Part of this initially cryptic incantation is actually down to the instructions from the ix.io website itself, in the TL;DR section.
Checking the curl
documentation for the -F
option, I discover that this venerable command line HTTP client can send multipart/form-data payloads, with files or file contents. So, breaking this incantation down, we have:
Part | Meaning |
---|---|
-F | send a POST with multipart/form-data content |
f:1 | the name of the form field that the website is expecting |
< | send as file contents (rather than an actual file) |
- | read the contents from STDIN |
And in the context of where this is being executed, STDIN is the ix
script's STDIN, in other words, whatever is piped into ix
when it's invoked.
In response to the form being POSTed, the ix.io website returns the newly minted unique pastebin URL that was created, and this is saved into the url
variable in the script, to be shared in various ways.
Lovely!
]]>ix
script that I wanted to pick out. It's not earth shattering but still useful to have seen.
At the end of the script, the URL generated from the newly created ix.io pastebin is put into the X buffer (so that it can be pasted into other X applications). This is done via the xclip
command, but xclip
is not installed everywhere, so the ix
script checks that it is available before trying to use it:
which xclip >/dev/null || exit 0
echo "$url" | xclip
This is a common pattern.
Because the use of xclip
here is right at the end of the script (by design, most likely) it's possible to abort (|| exit 0
) if xclip
isn't there. I guess an alternative, if it was necessary to run it mid-script, would be something like this:
which xclip >/dev/null && echo "$url" | xclip
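As an aside, a commonly recommended alternative to which for this sort of check is the POSIX builtin command -v. Here's a small sketch of the pattern (the have helper name is my own invention):

```shell
#!/usr/bin/env bash
# have: succeed if the named command is available on the PATH
have() { command -v "$1" > /dev/null; }

have ls && ls_found=yes || ls_found=no
have definitely_not_a_command_xyz && odd_found=yes || odd_found=no
echo "ls: $ls_found, missing: $odd_found"  # ls: yes, missing: no
```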
Anyway, worth knowing and having seen it, right?
]]>In one of his streams I saw him use ix
and thereby discovered ix.io - a simple pastebin. He uses his ix
script to share code and other content, either from the command line or from within Vim directly. It's only 14 lines including comments, but I've learned stuff from it already.
If ix
is invoked with an argument, it's treated as the unique identifier for a specific pastebin, and that pastebin is retrieved, such as 2pgP (which is another of his scripts with lots to learn from - twitch
).
The part of ix
that handles this is simply:
if [ -n "$1" ]; then
exec curl -s "ix.io/$1"
fi
Basically in this mode, there's no point in processing the rest of the script (beyond the small section you see here), so the handling of the input should finish when the pastebin is retrieved.
Until now, I would have written it like this:
if [ -n "$1" ]; then
curl -s "ix.io/$1"
exit
fi
But that's simply unnecessary, and in fact arguably less efficient too. The Bash man page mentions, for exec
, this fact: "If command is specified, it replaces the shell. No new process is created.". In other words, in this if ... fi
, the curl
command replaces the script's execution, rather than being executed as a sub process.
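The replace-the-shell behaviour is easy to observe in a child bash:

```shell
#!/usr/bin/env bash
# exec replaces the running shell with the given command,
# so the final echo never executes
output=$(bash -c 'echo before; exec echo replaced; echo never')
echo "$output"  # prints "before" then "replaced"
```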
Sometimes there's a beauty in the smallest things.
]]>The range of subjects is wide, and breadth of discussions wider, and it's very terminal centric, which I like.
I've been inspired to level up my shell scripting game, not least by watching what he does and reading what he's written. To that end I've created a small new blog where I'll add posts as and when I get the chance. The blog is 'Autodidactics' and is a play on a phrase that Rob used in a reply to me on Twitter the other day.
I've made a small start with Using exec to jump, but that's also the point. The things I see, and that grab my attention and help me improve my knowledge, are small. One of the keys to continuous improvement and learning is adding to one's knowledge base one small gem at a time.
]]>:+
for expanding optional values
I've been increasing my Bash scripting activities recently, not least in relation to some live stream episodes relating to Enterprise Messaging and have used some of the shell parameter expansion facilities described in section 3.5.3 Shell Parameter Expansion of the GNU Bash manual. In particular, I've been using what I call the "default value" (:-
) form:
${parameter:-word}
This form has this description: "If parameter is unset or null, the expansion of word is substituted. Otherwise, the value of parameter is substituted" and is very useful for setting default value for parameters that are expected at invocation time, for example.
On a walk yesterday I was listening to an episode of a series of podcasts I'd discovered that very day, on Hacker Public Radio. It's an in-depth series on Bash scripting, and has quite a few episodes, some very recent, and the early ones dating back to 2010. The episode I listened to was hpr1648 :: Bash parameter manipulation by Dave Morriss, and I enjoyed it very much.
One of the things Dave mentioned was this form of expansion (:+
), related to the one above, but sort of the opposite:
${parameter:+word}
The form has this description: "If parameter is null or unset, nothing is substituted, otherwise the expansion of word is substituted." Dave found this slightly odd, and commented that he couldn't quite think of a use case for this form. I couldn't, either.
Later that same day, I came across a live streamer on Twitch - Rob, aka rwxrob - who has some excellent content, also on YouTube. Watching the beginning of one of his live stream recordings, Google Shell Scripting Guide, Yes, Yes, 1000 Times Yes!, he introduces a Bash shell scripting resource from Google - the Shell Style Guide, which he goes through in detail in the live stream.
Noting how great that style guide looked, I start to read through it immediately. And what do I see, in the section on Quoting? An example of the :+
shell expansion form (in an illustration of something else entirely), which made complete sense and explained its real purpose! I couldn't believe it - discovering multiple complementary sources of information on Bash shell scripting on the same day? Goodness me.
Here's the example of that :+
shell expansion form taken directly from that section in the style guide:
git send-email --to "${reviewers}" ${ccs:+"--cc" "${ccs}"}
Look at that beautiful thing!
${ccs:+"--cc" "${ccs}"}
If there is a value in the ccs
variable, use it, but in the expanded context of it being a value to the --cc
switch used with the git
command. The value (most likely one or more email addresses) would be of no use on its own, but put with --cc
it makes complete sense. And the icing on the cake is that the :+
form substitutes nothing if the variable is null or unset, meaning there's no carbon-copying requested if there are no emails listed in the ccs
variable.
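Here's the same idea in miniature (the address and variable names are invented for illustration):

```shell
#!/usr/bin/env bash
ccs="alice@example.com"
unset noccs
# :+ substitutes the word only when the variable is set and non-empty
with_value="${ccs:+--cc $ccs}"
without="${noccs:+--cc $noccs}"
echo "with: [$with_value] without: [$without]"
# with: [--cc alice@example.com] without: []
```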
Now that made my day. I was originally with Dave on not being able to think of a reason for the :+
form, and then whammo, there's a perfect example right there. Thanks Dave, thanks Rob, and thanks to the Googlers who wrote the style guide!
I saw a tweet from Simon Willison earlier this week pointing to Matt Webb's 15 rules for blogging, and my current streak. I decided that I would also like to try to write more, and one of the things getting in my way was the slight friction in starting a new post. I use GitHub Pages and Jekyll behind the scenes, and my posts are in Markdown, one file per post (I like the simplicity of this, it reminds me of Rael Dornfest's Blosxom).
So running the risk of being accused of a small amount of yak shaving, I wrote a very basic script (with Simon's "perfect is the enemy of shipped" in my head) that I could use to start a new post quickly and pushed it to my dotfiles repo.
The script is newpost and is very basic, having taken me less than 10 mins to write. That's sort of the point. I may refine it as I go on, in fact I probably will; not least because the function that generates a filename from a post title is very basic indeed, but also because I would like to perhaps create a new tmux session for editing and running Jekyll locally for test rendering. But it's good enough for now, and in fact I kicked off this post using it, by typing:
> newpost reducing writing friction
whereupon I landed in Vim with this in the file, all ready:
---
layout: post
title: Reducing writing friction
---
That'll do for now!
Incidentally, I'm already on a small path to writing more, having adopted Simon's Today I Learned (TIL) mini-post approach. I've written a few TIL posts on this blog recently and I feel very freed by the constraints.
Update 2020-10-08 I've moved these posts to a new blog autodidactics - see A new learning source for shell scripting for the background.
]]>I run my own DNS locally via Pi-hole, but I also like to have SSH configuration to specify various options depending on the hosts I'm remotely connecting to. Usually it's the username to use, sometimes it's whether I want to do X11 forwarding, and so on.
My work machines have very odd and hard to remember hostnames. I could use the SSH configuration feature (via the .ssh/config
file) to get around this, like this:
Host easy
HostName hard-to-remember
User username-to-use
Then I could just remotely connect to that hard-to-remember
host machine like this:
ssh easy
(As a bonus, having securely shared public key credentials with ssh-copy-id
beforehand makes this process even smoother.)
But I don't want to expose those hard-to-remember
work machine hostnames in the configuration.
I learned today about the HOSTALIASES
environment variable which is supported by glibc
's resolver function gethostbyname()
. Pointing HOSTALIASES
to a file of "aliasname realname" pairs of hostnames means that commands that use gethostbyname()
to resolve hostnames can be given alias hostnames instead of real hostnames. The ssh
command uses that function.
This is what I did:
First, I created a file host.aliases
(making sure not to check this file into a git repo, by adding the file name to .gitignore
) with content like this:
oldmbp realsecrethostname1
newmbp anothersecretworkhostname
Then, in my .bashrc
, I set the HOSTALIASES
environment variable to point to this file:
export HOSTALIASES="$HOME/.dotfiles/host.aliases"
Finally, I modified the contents of my .ssh/config
file to use wildcards matching the aliases:
Host *mbp
User username-to-use
That way I can use easy and memorable hostnames when connecting to my work machines (e.g. ssh oldmbp
) without exposing the hostnames in any public configuration.
[
symbol is not syntax, it's an executable
In my live stream episode this morning I added to a function so that it looked like this:
getservicekey () {
local instance=${1}
local servicekey=${2}
local file
file="${instance}-${servicekey}.json"
if [ -r "${file}" ]; then
cat "${file}"
else
cf service-key "${instance}" "${servicekey}" | sed '1,2d'
fi
}
Looking at that condition if [ -r "${file}" ]
one would think that the [ ... ]
part is just some shell syntax to glue things together (to contain the expression under evaluation), part of the family of symbols including double quotes, semicolons and others.
But no. In a wonderfully quirky way, [
is actually a command, an executable. I remember seeing an odd character in my /bin/
directory a while back:
> ls /bin
[ dash expr ln pwd sync
bash date hostname ls rm tcsh
cat dd kill mkdir rmdir test
chmod df ksh mv sh unlink
cp echo launchctl pax sleep wait4path
csh ed link ps stty zsh
Check out that [
entry!
Turns out that [
is a synonym for test
. You can ask for the manual page for [
and you get something that covers [
and test
:
> man [
NAME
test, [ -- condition evaluation utility
SYNOPSIS
test expression
[ expression ]
DESCRIPTION
The test utility evaluates the expression and, if it evaluates to true, returns a zero
(true) exit status; otherwise it returns 1 (false). If there is no expression, test also
returns 1 (false).
...
Indeed, if you compare the two files /bin/[
and /bin/test
, they're the same.
The if
statement above can be rewritten with the test
synonym like this: if test -r "${file}"
, but now I know that [
is an actual executable, I'll take great delight in using it more.
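The equivalence is easy to check directly - both forms evaluate the same expressions and report the result via their exit status:

```shell
#!/usr/bin/env bash
# [ and test are synonyms; capture each exit status as true/false
test -n "hello" && t1=true || t1=false
[ -n "hello" ]  && t2=true || t2=false
[ -z "hello" ]  && t3=true || t3=false
echo "$t1 $t2 $t3"  # true true false
```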
Post Script: As well as being an executable, [
is also built in to many shells these days, so using it incurs none of the performance penalty that invoking an external command otherwise would.
I'd half wondered for a while why many of the directories in the root filesystem (/
) of a Linux installation are also to be found in /usr
. Recent convention implies that 'usr' stands for "User System Resources" but this is really only a sort of backronym.
There are executable files and libraries in /bin/
and /lib/
, for example, but also in /usr/bin/
and /usr/lib/
. Why? Regardless of what people might tell you today, the answer lies in the history of Unix (upon which Linux is based, of course). Created in the late 1960s / early 1970s on Digital PDP machines with limited disk space, the original Unix operating system binaries were placed on the root filesystem (mounted at /
), with e.g. executables, libraries and configuration files split across /bin/
, /lib/
and /etc/
directories respectively.
Separate from the root filesystem was another filesystem (on a separate disk) for users' home directories. This was /usr/
- yes, short for "user(s)".
As the Unix system grew, the space on the root filesystem disk eventually ran out, and a decision was made to move some of the executable and library content over to the other disk that was mounted on /usr/
. It made sense to replicate the names of the directories on that other disk, names which therefore became /usr/bin/
and /usr/lib/
because of the relation to where that filesystem was mounted.
Over time the place for the users' home directories moved from /usr/
to /home/
, meaning /usr/
content eventually lost any semblance of user-specific focus.
A bonus, related thing I learned fairly recently is that the "s" in sbin
(which also can be found in both the filesystems mounted on root (/
) and /usr/
) stands for "system", denoting content intended for system administration - binaries that are typically only useful when run by the root user.
replace
operation
I was pondering different approaches to solving the Codewars kata Simple string reversal, and having submitted my own, I started to browse other solutions. One that caught my eye was this, from users Bubbler and Tellurian:
function solve(str) {
let arr = [...str].filter(x => x != ' ')
return str.replace(/\S/g, _ => arr.pop())
}
If you look at the MDN page for String.prototype.replace() the syntax is given thus:
const newStr = str.replace(regexp|substr, newSubstr|function)
A function! I had probably come across this before but had forgotten. The beauty of the solution above lies in this possibility; while arr.pop()
mutates arr
, it does it in such a beguiling way that I don't have any issue enjoying the entire replace
call. Given that the regular expression g
modifier is used, the supplied function is called once per match, each call returning a (single character) value popped from the end of the arr
array.
Absolutely lovely.
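To make the mechanics concrete, here's a minimal illustration of my own (not from the kata solutions) showing that the replacer function is invoked once per match, receiving the matched text as its first argument:

```javascript
// Each match of /\S/g triggers one call to the replacer function; its
// return value is substituted into the result string.
const calls = [];
const result = "a b c".replace(/\S/g, match => {
  calls.push(match);
  return match.toUpperCase();
});
console.log(result); // "A B C"
console.log(calls);  // [ 'a', 'b', 'c' ]
```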
continue-on-error: true
in a GitHub Actions job spec to prevent failures from being flagged when a job step fails.
For me, high (non-zero) return codes don't necessarily denote failure; sometimes I want to use a high return code to control step execution (see TIL: git diff can emit different exit codes). But this means that the entire workflow run is marked as failed in a GitHub Actions context. To prevent this, you can use continue-on-error
at the job level to prevent a workflow from failing when a job fails.
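To make that concrete, here's a minimal sketch of where the setting sits (the job name and step are made up for illustration, not taken from my actual workflow):

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    # Job-level setting: a non-zero exit code from a step below will no
    # longer mark the whole workflow run as failed
    continue-on-error: true
    steps:
      - uses: actions/checkout@v4
      - name: Detect changes (may deliberately exit non-zero)
        run: ./detect-changes.sh
```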
I added this to my build workflow and it works nicely. See for example action execution 178511479 - i.e. even when there was a step that ended with a high return code (deliberately, to signify no changes), the entire execution was still marked as a success:
(Hat tip to Tom Jung for this).
Using the !
operator to control GitHub Actions job step execution based on git changes.
When defining a job step in GitHub Actions, you can specify a condition that must be met for a job step to run (in a broadly similar way to how things were in Job Control Language). In my profile repo's builder workflow, I wanted only to proceed with a git commit step if there were actual changes that had been made in a previous step.
Supplying the --exit-code
option to git diff makes it emit an exit code of 1 if there are differences, and 0 if not. This option is implicit in --quiet
, too.
As exit code 1 represents a fail in job step conditionals, the exit status of git diff can be inverted with the POSIX !
operator. So in this pair of steps, the second one will only run if differences were detected in the first one:
- name: Check for changes (fail if none)
run: |
! git diff --quiet
- name: Commit changes if required
if: ${{ success() }}
run: |
git config --global user.email "qmacro-bot@example.com"
git config --global user.name "qmacro bot"
git add README.md
git commit -m 'update README' || exit 0
git push
I had to write the ! git diff --quiet
within a YAML multiline expression (introduced with |
) as the GitHub Actions runner didn't like it on the same line, i.e. run: ! git diff --quiet
.
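The exit-code behaviour is easy to check in a throwaway repository; here's a sketch of my own (paths and messages are made up, not from the workflow):

```shell
# Create a scratch repository to observe git diff's exit codes
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email "demo@example.com"
git config user.name "demo"
echo first > file.txt
git add file.txt
git commit -qm "initial"

git diff --quiet
echo "clean tree: $?"      # 0 - no differences

echo second > file.txt
git diff --quiet
echo "dirty tree: $?"      # 1 - differences present

! git diff --quiet
echo "negated: $?"         # 0 - usable as a success condition
```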
In mta.yaml
files you can use the service-name
parameter to point to an existing service instance with a different name than the resource.
When the contents of a multi-target application file have been created or modified automatically for you, and there are references to generated service instance names, you don't have to globally replace those names to match whatever service instances you may already have, but instead you can add the service-name
parameter in the resource definition.
For example, when adding a new Workflow module to an existing (mta.yaml
-based) project, the generator will add something like this:
modules:
- name: OrderProcess
type: com.sap.application.content
path: OrderProcess
requires:
- name: workflow_mta
parameters:
content-target: true
resources:
- name: workflow_mta
parameters:
service-plan: standard
service: workflow
type: org.cloudfoundry.managed-service
It's often the case that you already have a Workflow service instance, but not with the generated name workflow_mta
. So after modifying the resource type
to be org.cloudfoundry.existing-service
, you can save some time and avoid changing all occurrences of workflow_mta
to match your actual instance name (e.g. my-workflow-instance
). Instead, use the service-name
parameter, like this:
modules:
- name: OrderProcess
type: com.sap.application.content
path: OrderProcess
requires:
- name: workflow_mta
parameters:
content-target: true
resources:
- name: workflow_mta
parameters:
service-name: my-workflow-instance
type: org.cloudfoundry.existing-service
I learned this a while ago but promptly forgot about it until now, when I needed it again.
I've had my La Pavoni PL lever espresso coffee machine for just over a year, and I'm extremely happy with it. Recently I ordered some wooden replacement handles for it from a vendor on Etsy and when they finally arrived I set about replacing the factory standard bakelite handles with the wooden ones.
Replacing them was easy except for one item - the knob on the steam valve shaft. The existing one was held in place by a metal split pin and seems to have a reputation of being hard to remove. I couldn't figure out the best way either; while I'd learned how to remove the shaft itself from this video on YouTube: La Pavoni Lever Machines: How to Remove the Steam Valve Shaft I couldn't quite figure out how to remove the knob itself.
So I asked on the r/espresso subreddit and the user Dr_Procrastinator gave some very helpful advice in a reply.
Having obtained a small and inexpensive set of nail punches from a local hardware / DIY store (B&Q Ashton, £5.25) earlier this week, I set about the task this morning, and thought I'd share some photos as it might help someone else.
First I removed the shaft, by simultaneously unscrewing it using the knob, and unscrewing the silver threaded nut with a spanner.
I just kept on unscrewing the shaft until it came completely loose and I could pull it out with the last few turns; there was a bit of resistance (from the gasket, I guess) but not much.
This is what it looks like once removed:
I needed to hammer the split pin out through the hole in the knob, so I used a folded towel to support the shaft, and laid the knob within / on top of the thumb-hole of a heavy-duty chopping board; in this photo you can see the hole marked with the arrow. This gave enough support and stability for the next step, and it also afforded enough gap underneath for the pin to start coming out of the bottom (it wasn't a worry though as the pin came through very slowly with a lot of effort!).
While the shaft and knob were supported, I could then use the appropriately sized punch with a small hammer to gently but firmly tap down on the split pin, driving it out slowly.
It took a while - I was going gently and it took me a few minutes. Once the pin was sticking out enough through the end, I pulled it out with a pair of pliers (it was quite a struggle).
While tapping the pin punch into the hole, the rim cracked a little and a bit of bakelite flaked off, but I think that is not unusual in such conditions.
Here's the final result:
Ironically, all this effort has been somewhat in vain, as the new wooden knob has a tiny hex nut that you are supposed to screw round and down into the shaft to hold it firm. But while I had plenty of Allen keys around the house, not one of them fitted snugly and I couldn't get a decent torque to turn the screw into the shaft. This meant that this securing mechanism - that relies on sheer force into the metal shaft - was useless.
I guess once I find an Allen key that fits, I can try to screw it in tight enough for the knob to hold and not slide around the shaft when I'm using it to open and close the steam valve.
Until then, I've gone back to the bakelite knob ... with the pin pushed back through, but not all the way so I can get it out more easily next time.
My interest in Raspberry Pis has increased over the last few months, and I've taken delivery of a couple of Pi Zero W models and another Pi 4, all from The Pi Hut, which I can heartily recommend. Using the Pis more often, I wanted to connect them to some remote storage, specifically my old but still relevant network storage device - an Airport Time Capsule from Apple - and also be able to seamlessly read and write files on my Google Drive. This short post documents what I did, so I can refer back to it if I need to do it again. Perhaps it might be useful for you too.
Here's the relevant section from a tree -L 2
in my home directory:
.
└── mnt
    ├── gdrive
    └── timecapsule
This was pretty straightforward and involves adding a new line in /etc/fstab
to represent the mount. This is the line:
//timecapsule/Data /home/pi/mnt/timecapsule cifs vers=1.0,password=sekrit,rw,uid=1000,iocharset=utf8,sec=ntlm 0 0
Broken down bit by bit, we have:
- //timecapsule/Data : The mount device, which is the Data share on the Time Capsule itself, identified here by the hostname timecapsule. I could have used an IP address but I'm happily running a local DNS setup on this homelab (on one of the Pi Zero W devices) using Pi-hole
- /home/pi/mnt/timecapsule : The mount point, where I want the Data share to be mounted
- cifs : The file system type; Common Internet File System (CIFS) is a dialect of the more Windows-specific Server Message Block (SMB) network protocol
- vers=1.0,password=sekrit,rw,uid=1000,iocharset=utf8,sec=ntlm : There are a few options specified here, such as the protocol version number, that the mount should be read-write, and so on (it seems as though the security mode 'ntlm', which was the default for a while, now must be specified explicitly)
- 0 : Dump (disabled), i.e. no backing up of this partition
- 0 : Boot time fsck (disabled), i.e. no file system check at boot time for this file system

A short note on the password (which isn't actually 'sekrit', obviously) - via the airport utility on my macOS device, I connected to the Time Capsule and set up the security for the disks with a "disk password", i.e. there is no user. This seemed simpler and good enough for what I need.
And that's it. After adding the line, a mount -a
(as root) did the trick, and of course the mount is performed on (re)boot too.
This was a bit more involved, but still worked first time. It involves the use of rclone, which is described as "a command line program to manage files on cloud storage", and support for Google Drive is included.
Basically, I followed this excellent tutorial from Artur Klauser - Mounting Google Drive on Raspberry Pi, so I won't repeat all the details; instead, I'll list the commands and activities I went through here.
I installed rclone
from the standard repositories, and got version 1.45; in other words, I didn't bother with trying to get the latest through a wget
pull of something newer - I think I'm happy with 1.45 (and it's worked well so far for me).
sudo apt install rclone
In order to use my own access configuration, I need a client ID and client secret pair for OAuth based authentication to the Google Drive API, and so I needed to get those from Google. I set up a fresh project "homelab-rclone", enabled the Google Drive API, and generated some OAuth 2.0 client credentials.
After generating the credentials, I went back to the command line and fired up rclone config
, following the guide in Artur's post mentioned earlier. Next, a simple test (rclone ls --max-depth 1 gdrive:
) showed that I could indeed see the contents of my Google Drive. The configuration procedure caused a file rclone.conf
to be created in ~/.config/rclone/
too, this is what the contents look like (I've elided the credential details of course):
[gdrive]
type = drive
client_id = 693105092413-sv17[...].apps.googleusercontent.com
client_secret = dQwG[...]
scope = drive
token = {"access_token":"ya29[...]","token_type":"Bearer","refresh_token":"1ae03[...]","expiry":"2020-06-07T16:58:38.185331858+01:00"}
Note that the permissions for this config file are appropriately set to 0600 (read-write for the owner, i.e. me, only).
Using Artur's instructions, I set up a user mode service to execute rclone
, including the use of lingering. Here, briefly, are the commands I used:
Create a new directory for this new user mode service:
mkdir -p ~/.config/systemd/user/
Add the [Unit]
and [Service]
entries as directed in the blog post, with a little modification as some of the options to rclone
didn't work for me:
cat <<EOF > ~/.config/systemd/user/rclone@.service
[Unit]
Description=rclone: Remote FUSE filesystem for cloud storage config %i
Documentation=man:rclone(1)
[Service]
Type=notify
ExecStartPre=/bin/mkdir -p %h/mnt/%i
ExecStart= \
/usr/bin/rclone mount \
--fast-list \
%i: %h/mnt/%i
[Install]
WantedBy=default.target
EOF
Enable & start the service:
systemctl --user enable rclone@gdrive
systemctl --user start rclone@gdrive
Set up lingering:
loginctl enable-linger $USER
And that's it. After rebooting (to test), I can see the contents of my Google Drive at ~/mnt/gdrive/
. Success!
This is a post in the "Brambleweeny Cluster Experiments" series of blog posts, which accompanies the YouTube live stream recording playlist of the same name. The video linked here is the one that accompanies this blog post.
Previous post in this series: Finding the Pis on the network
At the end of the previous post, we'd identified the MAC and current IP addresses of the Pis on the network. This information found its way into a couple of files used in a process that follows the general flow described in Jeff Geerling's Raspberry Pi Networking Setup.
First, we have the inventory
file defining the current ("as-is") IP addresses of the Pis:
[brambleweeny]
192.168.86.47
192.168.86.15
192.168.86.158
192.168.86.125
[brambleweeny:vars]
ansible_ssh_user=pi
Note that we've also got the definition of the default user pi
in there.
Then, we also have the vars.yml
file which is used by the main.yml
Ansible script to set things up. While we saw the contents in the previous post, it's worth looking at them again here:
---
# Mapping of what hardware MAC addresses should be configured with specific IPs.
mac_address_mapping:
"dc:a6:32:60:60:95":
name: brambleweeny1.lan
ip: "192.168.86.12"
"dc:a6:32:60:60:77":
name: brambleweeny2.lan
ip: "192.168.86.13"
"dc:a6:32:60:60:44":
name: brambleweeny3.lan
ip: "192.168.86.14"
"dc:a6:32:60:60:e3":
name: brambleweeny4.lan
ip: "192.168.86.15"
# Nameservers to use in resolv.conf.
dns_nameservers:
- "192.168.86.5"
This is the "to-be" state of the Pis, via configuration of specific hostnames and IP addresses, as well as what to use for domain name resolution, for each of the Pis that are to be identified by their MAC addresses. More explicitly, I want to move from dynamically allocated IP addresses (which are currently 47, 15, 158 and 125) to statically allocated IP addresses 12, 13, 14 and 15.
Running the Ansible main.yml
playbook as it stands right now presents us with a problem:
-> ansible-playbook -i inventory main.yml
PLAY [brambleweeny] ***
TASK [Gathering Facts] ***
The authenticity of host '192.168.86.47 (192.168.86.47)' can't be established.
ECDSA key fingerprint is SHA256:AJ5628fGhewiqdu/V2+B1LkR2HKGa+nRcwjYiiTGqWg.
Are you sure you want to continue connecting (yes/no)?
The authenticity of host '192.168.86.15 (192.168.86.15)' can't be established.
ECDSA key fingerprint is SHA256:sn2otbKVAa9Jsj+i3W0poIK731+pBP+ivbUrATJGVQk.
Are you sure you want to continue connecting (yes/no)?
The authenticity of host '192.168.86.158 (192.168.86.158)' can't be established.
ECDSA key fingerprint is SHA256:jFgPSwjEQsCSUx+nJcZ6ub9EhoGC1I1vSX5uSvVc1YE.
Are you sure you want to continue connecting (yes/no)?
The authenticity of host '192.168.86.125 (192.168.86.125)' can't be established.
ECDSA key fingerprint is SHA256:Tl3t427yXmbPIXjgBNBDHtNuw+MQUS132xhX6DCgo9E.
Are you sure you want to continue connecting (yes/no)?
We've never connected to these Pis before now, so ssh
, which is at the heart of Ansible's connection to them, will appropriately complain that it doesn't recognise them. This "complaint" comes about from ssh
's default approach to checking the keys of remote hosts, which is what we normally want (i.e. be strict!).
But for this particular operation we need to relax this approach, and for that we can use the StrictHostKeyChecking
option, which can either be set in the ssh
config file (~/.ssh/config
at a user level) or on the command line.
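For the config file variant, a stanza might look like this (a sketch of my own; the Host pattern scopes the relaxed checking to just the lab subnet rather than to every host):

```
Host 192.168.86.*
    StrictHostKeyChecking no
```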
Here's the difference between trying to ssh
to one of the Pis without and then with the option turned off:
-> ssh pi@192.168.86.47
The authenticity of host '192.168.86.47 (192.168.86.47)' can't be established.
ECDSA key fingerprint is SHA256:AJ5628fGhewiqdu/V2+B1LkR2HKGa+nRcwjYiiTGqWg.
Are you sure you want to continue connecting (yes/no)?
Host key verification failed.
-> ssh -o StrictHostKeyChecking=no pi@192.168.86.47
Warning: Permanently added '192.168.86.47' (ECDSA) to the list of known hosts.
pi@192.168.86.47's password:
Note that in this second example, even before the password has been entered, the key for this remote Pi has now already been added to ~/.ssh/known_hosts
.
Ansible makes it easy for us to add ssh
options to the inventory
file, via the ansible_ssh_common_args
variable, which we do, at the end of the file, like this:
[brambleweeny:vars]
ansible_ssh_user=pi
ansible_ssh_common_args='-o StrictHostKeyChecking=no'
Trying the playbook again, we don't get a problem with the inability of ssh
to authenticate the Pi hosts' keys. Great! But this just reveals the next problem, which again we can learn from:
-> ansible-playbook -i inventory main.yml
PLAY [brambleweeny] ***
TASK [Gathering Facts] ***
fatal: [192.168.86.47]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh:
Warning: Permanently added '192.168.86.47' (ECDSA) to the list of known hosts.\r\n
pi@192.168.86.47: Permission denied (publickey,password).", "unreachable": true}
fatal: [192.168.86.15]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh:
Warning: Permanently added '192.168.86.15' (ECDSA) to the list of known hosts.\r\n
pi@192.168.86.15: Permission denied (publickey,password).", "unreachable": true}
fatal: [192.168.86.158]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh:
Warning: Permanently added '192.168.86.158' (ECDSA) to the list of known hosts.\r\n
pi@192.168.86.158: Permission denied (publickey,password).", "unreachable": true}
fatal: [192.168.86.125]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh:
Warning: Permanently added '192.168.86.125' (ECDSA) to the list of known hosts.\r\n
pi@192.168.86.125: Permission denied (publickey,password).", "unreachable": true}
to retry, use: --limit @/home/pi/raspberry-pi-dramble/setup/networking/main.retry
PLAY RECAP ***
192.168.86.125 : ok=0 changed=0 unreachable=1 failed=0
192.168.86.15 : ok=0 changed=0 unreachable=1 failed=0
192.168.86.158 : ok=0 changed=0 unreachable=1 failed=0
192.168.86.47 : ok=0 changed=0 unreachable=1 failed=0
Notice that the -o StrictHostKeyChecking=no
did what we wanted it to do, as we can see the following message for each host in the output: "Warning: Permanently added '192.168.86.n' (ECDSA) to the list of known hosts".
So we've got ssh
to not refuse to connect because it doesn't initially recognise the hosts, but now we're getting a "permission denied" issue.
Of course, we're getting a "permission denied" issue because the remote Pis don't have the public key of the user of my current host (i.e. ~/.ssh/id_rsa.pub
) for public key based authentication, and we haven't supplied a password either (which for each of the freshly booted Pis, is 'raspberry' for the 'pi' user).
A passwordless based remote access flow is ideal, so this is something we should address now. We need somehow to get my public key across to each of the Pis, in the right place i.e. in the remote user's ~/.ssh/authorized_keys
file. (If you've not used public key based ssh
access before, why not?)
There's a specific Ansible module for this - the authorized_key
module, and we can use it in a short playbook like this, which we'll call set_ssh_key.yml
:
---
- hosts: brambleweeny
tasks:
- name: Set authorized key from file
authorized_key:
user: pi
state: present
key: "{{ lookup('file', '/home/pi/.ssh/id_rsa.pub') }}"
But of course we can't just run this, as we're still unable to connect, for the same reason:
-> ansible-playbook -i inventory set_ssh_key.yml
PLAY [brambleweeny] ***
TASK [Gathering Facts] ***
fatal: [192.168.86.47]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh:
pi@192.168.86.47: Permission denied (publickey,password).", "unreachable": true}
fatal: [192.168.86.15]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh:
pi@192.168.86.15: Permission denied (publickey,password).", "unreachable": true}
fatal: [192.168.86.158]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh:
pi@192.168.86.158: Permission denied (publickey,password).", "unreachable": true}
fatal: [192.168.86.125]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh:
pi@192.168.86.125: Permission denied (publickey,password).", "unreachable": true}
to retry, use: --limit @/home/pi/raspberry-pi-dramble/setup/networking/set_ssh_key.retry
PLAY RECAP ***
192.168.86.125 : ok=0 changed=0 unreachable=1 failed=0
192.168.86.15 : ok=0 changed=0 unreachable=1 failed=0
192.168.86.158 : ok=0 changed=0 unreachable=1 failed=0
192.168.86.47 : ok=0 changed=0 unreachable=1 failed=0
So we have to authenticate a different way - with the 'raspberry' password (remember, we're already supplying Ansible with the user via the ansible_ssh_user
variable in the inventory
file). The -k
option for ansible-playbook
tells it to ask for a connection password, which it will then use on our behalf when connecting to each host.
It's worth spending a couple of minutes understanding how this actually operates. It uses the sshpass
command, which is therefore required (I didn't have this and had to install it with sudo apt install sshpass
). sshpass
is described by its man page as a "noninteractive ssh password provider". Most of the time when we run ssh
it's in "keyboard interactive" mode, which means that it can ask the user for a password if required. The man page states that ssh
"uses direct TTY access to make sure that the password is indeed issued by an interactive keyboard user", and that, fascinatingly, sshpass
runs ssh
in a dedicated TTY to trick ssh
into thinking it is indeed getting the password from an interactive user, when in fact it's not.
We can see this in action with a simple test:
-> sshpass -p 'raspberry' ssh pi@192.168.86.47
Linux raspberrypi 4.19.97-v7l+ #1294 SMP Thu Jan 30 13:21:14 GMT 2020 armv7l
[...]
pi@raspberrypi:~ $
Anyway, let's use the -k
option with ansible-playbook
to make use of this sshpass
utility; Ansible will first ask us for the password and then use sshpass
to pass it on to each of the ssh
connections it makes:
-> ansible-playbook -k -i inventory set_ssh_key.yml
SSH password: *********
PLAY [brambleweeny] ***
TASK [Gathering Facts] ***
ok: [192.168.86.15]
ok: [192.168.86.125]
ok: [192.168.86.47]
ok: [192.168.86.158]
TASK [Set authorized key from file] ***
changed: [192.168.86.158]
changed: [192.168.86.47]
changed: [192.168.86.125]
changed: [192.168.86.15]
PLAY RECAP ***
192.168.86.125 : ok=2 changed=1 unreachable=0 failed=0
192.168.86.15 : ok=2 changed=1 unreachable=0 failed=0
192.168.86.158 : ok=2 changed=1 unreachable=0 failed=0
192.168.86.47 : ok=2 changed=1 unreachable=0 failed=0
Success! From this point onwards, we can use ssh
to connect to each of the Pis, but via our public key, rather than a password:
-> ssh pi@192.168.86.47
Linux raspberrypi 4.19.97-v7l+ #1294 SMP Thu Jan 30 13:21:14 GMT 2020 armv7l
[...]
pi@raspberrypi:~ $
At this point I can retry the main.yml
playbook, knowing that Ansible will be able to successfully connect to each of the Pis, using the public key we've transferred, and also using the default user defined in the ansible_ssh_user
variable in the inventory
file:
-> ansible-playbook -i inventory main.yml
PLAY [brambleweeny] ***
TASK [Gathering Facts] ***
ok: [192.168.86.47]
ok: [192.168.86.15]
ok: [192.168.86.158]
ok: [192.168.86.125]
TASK [Set the current MAC address for eth0.] ***
ok: [192.168.86.47]
ok: [192.168.86.15]
ok: [192.168.86.158]
ok: [192.168.86.125]
TASK [Set variables based on eth0 MAC address.] ***
ok: [192.168.86.47]
ok: [192.168.86.15]
ok: [192.168.86.158]
ok: [192.168.86.125]
TASK [Set up networking-related files.] ***
changed: [192.168.86.47] => (item={'template': 'hostname.j2', 'dest': '/etc/hostname'})
changed: [192.168.86.15] => (item={'template': 'hostname.j2', 'dest': '/etc/hostname'})
changed: [192.168.86.158] => (item={'template': 'hostname.j2', 'dest': '/etc/hostname'})
changed: [192.168.86.125] => (item={'template': 'hostname.j2', 'dest': '/etc/hostname'})
changed: [192.168.86.47] => (item={'template': 'hosts.j2', 'dest': '/etc/hosts'})
changed: [192.168.86.15] => (item={'template': 'hosts.j2', 'dest': '/etc/hosts'})
changed: [192.168.86.158] => (item={'template': 'hosts.j2', 'dest': '/etc/hosts'})
changed: [192.168.86.125] => (item={'template': 'hosts.j2', 'dest': '/etc/hosts'})
changed: [192.168.86.47] => (item={'template': 'resolv.conf.j2', 'dest': '/etc/resolv.conf'})
changed: [192.168.86.15] => (item={'template': 'resolv.conf.j2', 'dest': '/etc/resolv.conf'})
changed: [192.168.86.158] => (item={'template': 'resolv.conf.j2', 'dest': '/etc/resolv.conf'})
changed: [192.168.86.125] => (item={'template': 'resolv.conf.j2', 'dest': '/etc/resolv.conf'})
changed: [192.168.86.47] => (item={'template': 'dhcpcd.conf.j2', 'dest': '/etc/dhcpcd.conf'})
changed: [192.168.86.15] => (item={'template': 'dhcpcd.conf.j2', 'dest': '/etc/dhcpcd.conf'})
changed: [192.168.86.158] => (item={'template': 'dhcpcd.conf.j2', 'dest': '/etc/dhcpcd.conf'})
changed: [192.168.86.125] => (item={'template': 'dhcpcd.conf.j2', 'dest': '/etc/dhcpcd.conf'})
RUNNING HANDLER [update hostname] ***
changed: [192.168.86.47]
changed: [192.168.86.15]
changed: [192.168.86.158]
changed: [192.168.86.125]
RUNNING HANDLER [delete dhcp leases] ***
ok: [192.168.86.47] => (item=/var/lib/dhcp/dhclient.leases)
ok: [192.168.86.15] => (item=/var/lib/dhcp/dhclient.leases)
ok: [192.168.86.158] => (item=/var/lib/dhcp/dhclient.leases)
ok: [192.168.86.125] => (item=/var/lib/dhcp/dhclient.leases)
ok: [192.168.86.47] => (item=/var/lib/dhcpcd5/dhcpcd-eth0.lease)
ok: [192.168.86.15] => (item=/var/lib/dhcpcd5/dhcpcd-eth0.lease)
ok: [192.168.86.158] => (item=/var/lib/dhcpcd5/dhcpcd-eth0.lease)
ok: [192.168.86.125] => (item=/var/lib/dhcpcd5/dhcpcd-eth0.lease)
PLAY RECAP ***
192.168.86.47 : ok=6 changed=2 unreachable=0 failed=0
192.168.86.15 : ok=6 changed=2 unreachable=0 failed=0
192.168.86.158 : ok=6 changed=2 unreachable=0 failed=0
192.168.86.125 : ok=6 changed=2 unreachable=0 failed=0
Very nice indeed!
At this stage, as advised in Jeff's networking setup README, we can reboot the Pis with the following direct shell module based command:
-> ansible all \
> -i inventory \
> -m shell \
> -a "sleep 1s; shutdown -r now" \
> -b \
> -B 60 \
> -P 0
192.168.86.47 | CHANGED | rc=-1 >>
192.168.86.15 | CHANGED | rc=-1 >>
192.168.86.158 | CHANGED | rc=-1 >>
192.168.86.125 | CHANGED | rc=-1 >>
Note that this is the last time we'll be using these "as-is" IP addresses; when the Pis restart they'll have the static IP addresses defined in the vars.yml
file we saw earlier. So at this point, the addresses in the inventory need to be updated to reflect that, for future Ansible-based management of these machines.
This is now what's in the updated inventory
file:
[brambleweeny]
192.168.86.12
192.168.86.13
192.168.86.14
192.168.86.15
[brambleweeny:vars]
ansible_ssh_user=pi
ansible_ssh_common_args='-o StrictHostKeyChecking=no'
The ansible_ssh_common_args
variable is still there, because we need it one more time. When the IP address of a remote host changes, ssh
will complain again, because the key isn't in known_hosts
. So a simple connection to each of the Pis with this StrictHostKeyChecking=no
option set will cause that complaint to be suppressed, and also cause the new keys to be stored:
-> ansible -m ping all -i inventory
192.168.86.12 | SUCCESS => {
"changed": false,
"ping": "pong"
}
192.168.86.13 | SUCCESS => {
"changed": false,
"ping": "pong"
}
192.168.86.14 | SUCCESS => {
"changed": false,
"ping": "pong"
}
192.168.86.15 | SUCCESS => {
"changed": false,
"ping": "pong"
}
Now we have another four lines in our ~/.ssh/known_hosts
file, reflecting the four Pis with their new keys, making a total of eight lines (one for each host when we had the DHCP-allocated IP addresses, and then one for each host with the new statically allocated IP addresses). To be thorough, it's probably a good idea to delete the first four lines, but more importantly, it's paramount that we remove the ansible_ssh_common_args
line from the inventory file now, to prevent future (and inadvertent) suppression of potentially real key warnings.
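One way to do that cleanup is ssh-keygen's -R option, which removes all entries for a given host. Here's a self-contained sketch of my own, demonstrated against a scratch file rather than the real ~/.ssh/known_hosts (note that 192.168.86.15 is reused in the new static range, so only the genuinely stale addresses would need removing):

```shell
# Build a scratch known_hosts with one stale (old DHCP) entry and one
# current (new static) entry, using a freshly generated key as stand-in data
workdir=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$workdir/id"
key=$(cut -d' ' -f1,2 "$workdir/id.pub")
printf '192.168.86.47 %s\n192.168.86.12 %s\n' "$key" "$key" > "$workdir/known_hosts"

# Remove the stale entry; drop -f to operate on ~/.ssh/known_hosts instead
ssh-keygen -R 192.168.86.47 -f "$workdir/known_hosts" > /dev/null 2>&1

cat "$workdir/known_hosts"   # only the 192.168.86.12 line remains
```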
And that's it for this post. Ansible is indeed a powerful system, but taking the time to understand what's going on has taught me things about basic networking (and in particular some ins and outs of ssh
) that I'm glad I know now.
Moreover, I now have a nice set of four Pis set up from a basic networking perspective, ready for the next steps.
All the episodes are live streamed on my YouTube channel and are then available on that same channel after the streams finish, as recordings. In this post are links to those recordings with a short description of each. You can easily spot the upcoming live streams and recordings as they always have the "Code at Home" background in the thumbnail, like this:
Here are the recordings of the live stream episodes so far. Click on the episode title link to get to the recording on YouTube.
Episode | Description |
---|---|
Fri 27 Mar 2020 Code at Home Ep.1 Setting up for our first challenge |
In this first episode we set up the tools that we need - the Project Euler and the repl.it websites. We also solve together the very first problem described on Project Euler: Multiples of 3 and 5. Code resources for this episode: CodeAtHome1 |
Mon 30 Mar 2020 Code at Home Ep.2 Fizz-Buzz and Fibonacci |
We start off by looking at the little "homework" challenge from last time, with a program to generate the output of a Fizz-Buzz game. Then the main part of this episode sees us take a first look at the Fibonacci sequence, what it is and how to work out the termns in that sequence, coding together a simple program to do that. Code resources for this episode: CodeAtHome2 and FizzBuzz |
Wed 01 Apr 2020 Code at Home Ep.3 Solving a Fibonacci related challenge |
Following on from the previous episode we take another look together at what we wrote already to generate the Fibonacci sequence, and rewrite it to make it better, using a generator function and even creating a function that produces other functions. With that, we go and solve Project Euler problem 2: Even Fibonacci numbers. Code resources for this episode: CodeAtHome3 |
Fri 03 Apr 2020 Code at Home Ep.4 Figuring out sentence statistics! |
Taking a break from numbers, we start to look at sentences and words, and how to parse them to grab basic data. In doing this we learn about arrays, and how to create and use them, even discovering functions and properties that are available on them. We also start to introduce the 'const' and 'let' keywords, and end with a gentle introduction to the super 'map' function. Code resources for this episode: CodeAtHome4 |
Mon 06 Apr 2020 Code at Home Ep.5 More on arrays and array functions |
Continuing on from the previous episode we start off by taking a look at character codes to understand the default sort() behaviour, digging in a little bit to the ASCII table. Then we look at a few more array functions, revisiting split() and map() and finally building a predicate function isPalindrome() that will tell us if the input is palindromic, a useful function that we'll need to solve Project Euler problem 4 in the next episode. Code resources for this episode: CodeAtHome5 |
Wed 08 Apr 2020 Code at Home Ep.6 Solving the palindromic products puzzle |
We start off by making slight improvements to our isPalindrome() function so that it will work with numbers as well as strings. Then we generate pairs of numbers in nested for loops, implementing an optimisation that will leave out duplicate calculations. We then check whether our code agrees with the answer to the sample in the problem (the product of two 2-digit numbers) and confident that it's OK, we put the code to work to calculate the main part of Project Euler problem 4, and it works! Along the way, we define a function that we then use to influence the behaviour of the sort() function. Between now and the next episode, think about how this function works, by looking at the Array.prototype.sort documentation at MDN. Code resources for this episode: CodeAtHome6 |
Fri 10 Apr 2020 Code at Home Ep.7 Looking at sort functions |
In this episode we dig more into why the default sort behaviour for our filtered palindromic products wasn't quite what we wanted, and look into how the sort() function can use a 'compare function' to tell it how to behave. Then after creating a useful function to generate a nice list of random numbers on demand, we explore our own compare function implementation, passing it to sort() to influence the behaviour. Lots of fun! Code resources for this episode: CodeAtHome7 |
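As an illustrative sketch of what a compare function does (the byValue name is mine):

```javascript
// A compare function tells sort() how to order two elements a and b:
// return a negative number to put a first, a positive number to put b
// first, and zero to leave them be. Without one, sort() compares
// elements as strings, which is why [10, 9, 100] sorts "wrong".
const byValue = (a, b) => a - b;

const numbers = [100, 9, 10, 1];
const asStrings = [...numbers].sort();        // default, lexicographic
const asNumbers = [...numbers].sort(byValue); // numeric

console.log(asStrings); // [1, 10, 100, 9]
console.log(asNumbers); // [1, 9, 10, 100]
```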
Mon 13 Apr 2020 Code at Home Ep.8 Finishing off sort, and introducing objects |
In this episode we expand our horizons with respect to arrays, and learn how you can have arrays with different types of data, and even nest arrays inside each other. We then move on to objects, which are an even more powerful way of representing and manipulating data. As a brief aside, we take an initial look at Project Euler Problem 22 - Names Scores, which we'll start to solve together next. Finally, we write another compare function to call sort() with, so that we can sort by referring to values of properties inside object structures. Code resources for this episode: CodeAtHome8 Data resources for this episode: episodes-A.js which we copied into our repl.it workspace. |
Wed 15 Apr 2020 Code at Home Ep.9 Looking at Project Euler problem Nr.22 |
We take a first proper look at the "Names Scores" problem, which is Nr.22 in the Project Euler series. There are a lot of things for us to do to solve the problem, but all of them definitely manageable. We spend a lot of this episode learning about how to open and read file contents, which we need to do to bring in the 5000+ first names that the problem is based upon. Code resources for this episode: CodeAtHome9 |
Fri 17 Apr 2020 Code at Home Ep.10 Continuing with Project Euler problem Nr.22 |
Now that we're comfortable with reading in the data from the file, following the previous episode, we can turn our attention to picking off each task we need to achieve to solve the problem. In this episode we look at stripping off the double-quotes from each name, and how to go about calculating individual letter scores for each name. We also take a brief look at the raw data that is provided to us from the file read process, and work out what it represents, by translating between hexadecimal and decimal and looking up values in an ASCII table. Code resources for this episode (same as the previous episode): CodeAtHome9 |
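The quote-stripping and letter-scoring steps might look something like this sketch (the helper names unquote, letterScore and nameScore are mine):

```javascript
// Strip the surrounding double quotes from a raw name like '"COLIN"'.
const unquote = s => s.replace(/"/g, '');

// 'A'.charCodeAt(0) is 65, so subtracting 64 gives A=1, B=2, ... Z=26.
const letterScore = c => c.charCodeAt(0) - 64;

// Score a whole name by adding up its letter scores.
const nameScore = name => {
  let total = 0;
  for (const c of name) {
    total += letterScore(c);
  }
  return total;
};

console.log(nameScore(unquote('"COLIN"'))); // 3+15+12+9+14 = 53
```

The value 53 for COLIN is the worked example given in the problem statement itself, which makes a handy check.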
Mon 20 Apr 2020 Code at Home Ep.11 An introduction to reduce() |
In this episode we took our time over getting acquainted with the powerful Array.prototype.reduce() function, the 'big sister' of Array.prototype.map(), Array.prototype.filter() and other similar array functions. Unlike map() and filter(), both of which expect to be passed functions that take a single parameter, and both of which produce an array as a result, the reduce() function expects to be passed a function that takes two parameters, and can produce a result of any shape (e.g. an array, an object or a scalar). We used reduce() to add up an array of numbers. Code resources for this episode (same as the previous episode): CodeAtHome9 |
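A small sketch of the two ideas above - reduce() taking a two-parameter function, and producing a non-array result (my own example, not the episode's code):

```javascript
// reduce() is passed a function of two parameters: the accumulator so
// far, and the current element. The second argument to reduce() is the
// accumulator's starting value.
const numbers = [1, 2, 3, 4, 5];

const sum = numbers.reduce((acc, x) => acc + x, 0);
console.log(sum); // 15

// Unlike map() and filter(), the result doesn't have to be an array -
// here we reduce to an object counting odd and even numbers.
const oddEven = numbers.reduce(
  (acc, x) => {
    acc[x % 2 === 0 ? 'even' : 'odd'] += 1;
    return acc;
  },
  { odd: 0, even: 0 }
);
console.log(oddEven); // { odd: 3, even: 2 }
```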
Wed 22 Apr 2020 Code at Home Ep.12 Finishing off Project Euler Nr.22 |
In this episode we finish off the coding for Project Euler problem 22. In doing so, we look at a feature of the Array.prototype.map() function that we've previously ignored - the fact that not only does it pass the element to the function we provide to it, but also that element's position in the array that's being processed. We use this feature to get the position of the element, to work out the final score for each name. Great! Code resources for this episode (same as the previous episode): CodeAtHome9 |
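A sketch of the index-passing feature (the example names and helpers are mine; positions in the problem are 1-based, hence index + 1):

```javascript
// map() passes our function not just each element but also that
// element's index in the array - exactly what we need for positions.
const names = ['ABE', 'COLIN']; // illustrative, already sorted

const letterScore = c => c.charCodeAt(0) - 64;
const nameScore = name => name.split('').map(letterScore).reduce((a, b) => a + b, 0);

const scores = names.map((name, index) => (index + 1) * nameScore(name));
console.log(scores); // ABE: 1 * (1+2+5) = 8, COLIN: 2 * 53 = 106
```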
Fri 24 Apr 2020 Code at Home Ep.13 Looking at Base 2 and our next challenge |
We start off by taking a peek at the next challenge which is Project Euler Nr.36 - Double-base palindromes where we have to check not only decimal but binary numbers for palindromic properties. So we take an excursion into binary, or Base 2, to understand how that works. Then we grab the isPalindrome() function from a previous CodeAtHome episode to reuse, and quite easily solve the problem together. Great! Code resources for this episode CodeAtHome13 |
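The double-base check can be sketched like this (my own illustrative version), using toString(2) to get the binary representation:

```javascript
// isPalindrome() converts to a string first, so the same check works
// for a number in any base we can render as a string.
const isPalindrome = s => {
  const str = String(s);
  return str === str.split('').reverse().join('');
};

// A double-base palindrome is palindromic in base 10 and in base 2.
const isDoubleBasePalindrome = n =>
  isPalindrome(n) && isPalindrome(n.toString(2));

console.log((585).toString(2));           // '1001001001'
console.log(isDoubleBasePalindrome(585)); // true, as in the problem statement
```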
Mon 27 Apr 2020 Code at Home Ep.14 Refactoring to improve our code |
There are nearly always opportunities to make improvements to code; whether that is for readability, performance, or other reasons. In this episode we looked at what we wrote for the solution we coded together on the previous episode and made a few improvements, by tweaking some values to make the calculation perform better, and by adding a "helper" function that we can use in lots of places and that encapsulates complexity that we can then forget about. Code resources for this episode CodeAtHome14 |
Wed 29 Apr 2020 Code at Home Ep.15 Continuing our refactoring journey using 'range' |
We refactored some of our code in the previous episode and in this one we continued to do so, creating our own utility module and moving functions into that, and then importing what we need to the main index.js file. Then before tackling our range() function we looked at how range() works in Python, so that we could emulate that, for consistency. Then we started to write a new version of our range() function accordingly. Code resources for this episode CodeAtHome15 |
Fri 01 May 2020 Code at Home Ep.16 recursion (noun): for a definition, see 'recursion' |
After finishing off our reworked range() function so that it behaved more like Python 3's range() function, we moved on to start looking at recursion - what it is and where it came from. It's a wonderful concept but does take some time to understand, so we started slowly by looking at how we might use a recursive function definition to add some numbers together - with no explicit looping! Code resources for this episode CodeAtHome16 |
Mon 04 May 2020 Code at Home Ep.17 A little more exploration of recursion |
In last Friday's episode we got our first taste of recursion, defining a recursive function sum() to add a list of numbers together. In this episode we had another look at that recursive definition to understand the pattern a bit more, with the base case and the main part, and then expanded our knowledge by creating a similar function mul() to multiply a list of numbers together, and making note of the (very few) things that had to change. Then we looked at what factorials were, and defined a recursive function to determine factorials for us. In fact, we defined it three different ways, ending up with a single-expression function that used the ternary operator. The definition was a little terse, but hopefully interesting! Code resources for this episode CodeAtHome17 |
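The recursive pattern described above - a base case plus a main part - might be sketched like this (my own version, including the single-expression ternary form):

```javascript
// Recursive sum: the base case is the empty list; the main part takes
// the first element and recurses on the rest - no explicit looping.
const sum = xs => (xs.length === 0 ? 0 : xs[0] + sum(xs.slice(1)));

// The same pattern for factorial, as a single-expression function
// using the ternary operator: terse, but it reads like the maths.
const factorial = n => (n <= 1 ? 1 : n * factorial(n - 1));

console.log(sum([1, 2, 3, 4])); // 10
console.log(factorial(5));      // 120
```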
Mon 11 May 2020 Code at Home Ep.18 Looking at our next coding challenge together |
We embark upon our last challenge for this series, which is Project Euler problem 52 Permuted Multiples, and take our time exploring the problem space in the REPL together. We build up solid little functions to help us out along the way, and to codify what our thoughts are, starting with digits(), contains() and sameLength(). We get to the stage where we can check through to see if all the digits in one number are in another number ... but we're not done yet, as we saw towards the end where we came across the 'subset' issue. We'll finish this off in the next episode, by looking at solving that (using sameLength()) and improving the comparisons with the array function every(). Code resources for this episode CodeAtHome18 |
Fri 15 May 2020 Code at Home Ep.19 Finishing off our challenge, improving the code |
We did it! We finished off and solved Project Euler problem 52 together. In finishing off, we completed the isPermutation() function, which needed to check two things, in sequence - first, whether the length of each number was the same, and then (and only if the lengths were the same) whether the digits in the first number were in the second number. We also created the meetsRequirements() function, and indeed wrote two versions of it, which checked the actual requirements of the problem, for each number we could throw at it - which we did in a simple while loop at the end. Code resources for this episode CodeAtHome18 |
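One way to sketch an isPermutation() that checks the lengths first and then the digits, using every() - this is a variation of my own on the approach described, comparing sorted digits so that repeated digits are handled correctly:

```javascript
// Sorted digits of a number, as an array of characters.
const digits = n => String(n).split('').sort();

// Two numbers are permutations of each other when they have the same
// number of digits and, once sorted, the digits match position by
// position - which every() lets us express directly.
const isPermutation = (a, b) => {
  const da = digits(a);
  const db = digits(b);
  return da.length === db.length && da.every((d, i) => d === db[i]);
};

console.log(isPermutation(125874, 251748)); // true - the pair from the problem
console.log(isPermutation(123, 1234));      // false - different lengths
```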
Parents: if you have any questions (during the streams or in between) please don't hesitate to contact me at qmacro+codeathome@gmail.com.
It's super important for us to stay at home right now, but that doesn't mean that we can't have some fun learning together online. If you have kids at home and want to give them a break from school work at the kitchen table, and they fancy learning a bit of programming for beginners, then this Code at Home idea might be helpful.
Who I am
First, a bit about who I am.
My name's DJ Adams and I live in Woodhouses, Failsworth, in Manchester. I am proud to work for SAP as a developer advocate, and I feel supported by them in this endeavour. I have a short bio page here: qmacro.org/about. I've been involved with teaching kids to code for a good while now, and you can read more about that down below.
The idea
The idea is that school children who are stuck at home can take part in some coding for beginners, by connecting to YouTube and joining a "Code at Home" live stream that I'll broadcast on a regular basis at a fixed time in the day, for an hour. I don't have this all planned out; I'm just going to start it, see how it goes, and I'd welcome input from anyone. What I do know is that I want it to be approachable and for beginners, where the age range is around 11 and up.
We'll be learning basic programming together using the most popular language out there (by some measurements) - JavaScript, and we'll be starting off by using some simple problems from the Project Euler website. Here's problem number 1, titled "Multiples of 3 and 5", to give you an idea:
If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23. Find the sum of all the multiples of 3 or 5 below 1000.
The plan is that we'll use each hour to solve little problems like this, by writing code in JavaScript.
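To give a flavour of what such a solution might look like (this is an illustrative sketch of mine, not the code we'll necessarily write together):

```javascript
// Sum the multiples of 3 or 5 below a given limit. The % operator
// gives the remainder of a division, so n % 3 === 0 means "n is a
// multiple of 3".
const sumOfMultiples = limit => {
  let total = 0;
  for (let n = 1; n < limit; n++) {
    if (n % 3 === 0 || n % 5 === 0) {
      total += n;
    }
  }
  return total;
};

console.log(sumOfMultiples(10)); // 23, matching the example in the problem
```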
Here are some questions and answers that hopefully will tell you what you need to know.
What's required?
To connect and join in, you'll need:
If you want to enable your kids to join in and type along during the live stream, and want to use the same setup as I'll be showing, sign up to the following two websites:
Of course, if all you want to do is watch, that's fine and then you don't need anything other than your web browser. But it's fun to type the code in yourself on your own computer too and see the results.
How do we follow the schedule and get reminders?
I'll be broadcasting these Code at Home live streams on YouTube, on my channel at youtube.com/djadams-qmacro, so the simplest way is to head over to the channel, subscribe, and there you'll be able to see the upcoming Code at Home live stream episodes.
If you miss an actual live stream, don't worry, because they're all automatically recorded and made available on YouTube.
When and how often is this happening?
I'll start out by doing this regularly from 15:30 to 16:30 (UK time). This is so that any school work at home can be completed and out of the way during the day first.
I'll do the first one this coming Friday 27 March - and have set up the live stream ready on my YouTube channel, so you can set a reminder for it.
What will it be like?
On the live streams I'll be sharing my screen, showing the problem we're looking at solving, and showing what I'm typing in the repl.it website. There'll also be a little "picture-in-picture" of me on the camera. To get a rough idea, here's a still from last Friday's episode of another (more technical) live stream series that I run (called "Hands-on SAP dev with qmacro", which is for the SAP development and technical community):
I definitely encourage you parents to join too if you can, so you can satisfy yourself that this is something that might help. Perhaps you can help with the setup too.
I used to run a CodeClub at the Woodhouses Village School, and I have been a volunteer at Manchester CoderDojo, at MadLab and also at the Sharp Project in Newton Heath.
Here's a pic of me giving a Scratch session at Manchester CoderDojo:
I'm a STEM Ambassador who has given sessions at schools including Xaverian Sixth Form College, and I've given a session at TEDx Oldham on Our Computational Future (making the argument for teaching our kids to code).
I've also helped out as a mentor at MadLab in Manchester during the Young Rewired State (YRS) events. Here's a still of me from the video about our activities for YRS 2013:
OK, what's the next step?
Head on over to the channel on YouTube, get set up for the first one and be ready to connect to the live stream this coming Friday 27 March. Here's the YouTube link for that first live stream episode: https://youtu.be/X7gtbWiHTBY.
If you have any questions, please reach out to me on Twitter (my handle is @qmacro) where I'd be more than happy to try and answer them.
Also, bookmark this blog post to be able to come back to it at a later date, in case there are updates I have added, with extra information, a change in schedule, things like that.
Please also wish me luck - while I've taught kids to code in many contexts, and I've live streamed in my professional life, I've never combined the two. I want this to be helpful in these trying times, and for that I will need your help too. Thanks!
This is a post in the "Brambleweeny Cluster Experiments" series of blog posts, which accompanies the YouTube live stream recording playlist of the same name.
Next post in this series: Preparing the OS image
I've been skirting around the edges of experimentation with Raspberry Pis, to learn about clustering and containerisation using such technologies as Docker and Kubernetes. The topic area is fascinating in and of itself, but I think it's an important collection of subjects that one should know about in the SAP tech world too, given the cloud direction we're taking and how resources, services and applications are managed there.
I haven't gone too deep yet, having paused the project over the first couple of months of 2020 - I think one of the things holding me back was that I had stumbled my way through things at the start and wanted to hold off going further before I'd understood things a little more.
Here's a photo of the cluster that we built.
I know that writing about my activities helps my understanding, and so I thought I'd set out to write some posts about what I'm doing. As a bonus, they might help you too.
I've been learning a lot from two folks in particular, both of whom I came across, independently, from recordings of talks I saw them give.
Jeff Geerling gave a talk at DrupalCon in 2019 called Everything I know about Kubernetes I learned from a cluster of Raspberry Pis, which inspired me to put together my own cluster of Raspberry Pis. Clusters of Raspberry Pis are called brambles, which I think is a nice touch. Jeff named his cluster a Dramble, owing to the use of Drupal on it, and has some great resources at pidramble.com. Moreover, I've been learning about Ansible from Jeff too, generally but also specifically for setting up the Pis. I even bought his book Ansible for Kubernetes which I can definitely recommend.
Jeff documented his hardware setup over on the PiDramble site; in particular, I went for a variation of his 2019 Edition, using Power over Ethernet (PoE) rather than running individual power cables to each Pi.
Alex Ellis has been doing some fascinating work in this space and sharing a ton of stuff on Kubernetes, serverless and in particular on OpenFaaS which he set up and runs as an open project. He's also a prolific writer and sharer, and I recommend you bookmark a few of his articles which are rich in content and inspiration. I saw a recording of a talk he gave with Scott Hanselman at NDC London in 2018: Building a Raspberry Pi Kubernetes Cluster and running .NET Core, which is definitely worth a watch.
To set my cluster up, here's what I ended up buying:
There are plenty of cases and mounting possibilities; just make sure, if you go for something different, that there's room for the PoE HAT mounted on top of each of the Pis.
I'm pleased with the result as there's a lot less cabling to deal with - it's just a single ethernet cable from the switch to each Pi, an ethernet cable from the switch to the main network, plus the power supply and cable to the switch, and that's it.
You can see this in the photo I took yesterday, which also shows an original Raspberry Pi Model B that I used as a console for various things.
The setup is compact and I can keep it on a shelf behind my main desk. I'm somewhat averse to fan noise, which does mean that I don't run the cluster all the time, as there are fans on the PoE HATs that come on now and again. But the lights are pretty!
The next post I want to write is about how I set up the Pis ready for the cluster experiments, and what I learned. Until then, you might want to take a look at the recording of a live stream from earlier this month where I just went ahead and followed Alex's blog post Walk-through - install Kubernetes to your Raspberry Pi in 15 minutes. The key takeaway for me was that it was very easy.
The live stream was the first in what may turn out to be a series of cluster experiment live streams, so I've put the video into a playlist to help you find them.
The playlist is called Brambleweeny Cluster Experiments, where the name "Brambleweeny" is a conflation of the "Bramble" name for a Pi cluster, and the name of a computer in the Hitch Hiker's Guide To The Galaxy, the "Bambleweeny 57 Submeson Brain".
Until next time, happy clustering!
Next post in this series: Preparing the OS image
This is a post in the "Brambleweeny Cluster Experiments" series of blog posts, which accompanies the YouTube live stream recording playlist of the same name. The video linked here is the one that accompanies this blog post.
Previous post in this series: Starting out with Raspberry Pi experiments
Next post in this series: Finding the Pis on the network
This post has been updated to reflect the new name of the OS -- Raspberry Pi OS -- which changed (from Raspbian) around May 2020.
There are many ways to prepare base OS images for your Raspberry Pi computers. In the past I've used various devices and software to write bootable images to SD cards, but I've settled on using balena Etcher, which I read about in Alex Ellis's Walk-through - install Kubernetes to your Raspberry Pi in 15 minutes.
The Pis in the cluster will be run headless (the only cable running to each of them will be an Ethernet cable). This has a couple of implications for us at this stage, which are (a) there's no point installing graphical tools or a full desktop, and (b) we'll be using remote access only.
There's no point in installing a graphical user interface (GUI) or windowing system on the Pis. That said, of course, with the power of X Windows we can have remote GUI windows but that's another story and a path we don't want to take for now.
There are different operating systems available for the Raspberry Pi; the one used here, at the time of this edit, is Raspberry Pi OS (previously called Raspbian), a Linux OS based on Debian Linux (currently Buster). Here, the "Lite" image, which comes without GUI software or a windowing system, is appropriate. This is convenient as the image is a lot smaller in size, too.
To access the headless Pis remotely, we'll be using Secure Shell (SSH). There's a bit of a chicken-and-egg problem though, in that we need to be able to configure the Pis to allow remote access via SSH, before we can make the connection. For that we'd need a keyboard and screen, to be able to log on, install and set up the SSH service.
However, headless use of Raspberry Pi computers is so common that there's a nice way to solve this dilemma, and it's described in the official documentation, in a section on the boot folder. Basically, the OS image that is to be written to the SD card for installation on the Pis has a partition named boot. If you stick an SD card with a Linux image like Raspbian Buster on it into your desktop computer or laptop, and automatic mounting is enabled, you'll see this boot partition mounted, and you can have a look inside.
If you place a file called ssh in this boot partition, then when the image is inserted into a Pi and the Pi is booted, SSH will be enabled automatically and set up appropriately. Nice!
Most of the articles on the preparation of SD cards for Pis involve multiple steps: first, burn the OS image, then eject and re-insert the SD card to have the boot partition from that new image automatically mounted, then create the ssh file in that partition, and finally unmount the partition. This is fine for the occasional SD card preparation, but when preparing SD cards for an entire cluster, this can get tedious.
So I decided to embrace one of the three virtues of a programmer, namely laziness.
After downloading the Raspbian Buster Lite image, I unzipped the archive to reveal the actual image file, which I then mounted. In the mounted partition, I added the ssh file, before unmounting it again. I then zipped the now-SSH-enabled image file up again, ready for writing to the SD cards.
On my macOS machine (which is one of the few devices I have that has an SD card slot), I unzipped the archive like this:
-> unzip 2020-05-27-raspios-buster-lite-armhf.zip
Archive: 2020-05-27-raspios-buster-lite-armhf.zip
inflating: 2020-05-27-raspios-buster-lite-armhf.img
Then I used the disk image utility hdiutil to mount the .img image file (noting also that the boot partition is only one of two partitions on the image - the other, of type "Linux", being the eventual root partition):
-> hdiutil mount 2020-05-27-raspios-buster-lite-armhf.img
/dev/disk3 FDisk_partition_scheme
/dev/disk3s1 Windows_FAT_32 /Volumes/boot
/dev/disk3s2 Linux
The boot partition was made available at /Volumes/boot, as we can see from what df tells us:
-> df | grep disk3
/dev/disk3s1 516190 104290 411900 21% 0 0 100% /Volumes/boot
I could then add an empty ssh file to the filesystem on that partition:
-> touch /Volumes/boot/ssh
If you'd also like your Raspberry Pi to connect to your WiFi network when it boots (which will often be the case, even for headless mode), then at this stage you can also add another file; this time, it's not an empty file like the ssh one, but one with configuration so that the Pi can connect to and authenticate with your WiFi network.
If you want to do that, create a file called wpa_supplicant.conf in the same boot partition as you created the ssh file in, and add the following configuration to it:
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
country=<Insert 2 letter ISO 3166-1 country code here>
network={
ssid="<Name of your wireless LAN>"
psk="<Password for your wireless LAN>"
}
(This example is taken directly from this useful page: Setting up a Raspberry Pi headless).
If you're in the UK and wondering about the ISO-3166-1 country code that you need, it's "GB".
After that, I unmounted it:
-> umount /Volumes/boot
And then created a new zip archive:
-> zip 2020-05-27-raspios-buster-lite-armhf-ssh.zip 2020-05-27-raspios-buster-lite-armhf.img
adding: 2020-05-27-raspios-buster-lite-armhf.img
I could then use this new image archive file 2020-05-27-raspios-buster-lite-armhf-ssh.zip with balena Etcher, creating all four SD cards ready for the Pis in the cluster. Result!
Balena Etcher is great, but if, like me, you're more of a terminal person, you can also perform this step from the command line.
Whether you use balena Etcher or the command line, it's at this stage, of course, that you insert the SD card.
The steps are described well in Copying an operating system image to an SD card using Mac OS so here's a precis:
-> diskutil list
/dev/disk0 (internal, physical):
#: TYPE NAME SIZE IDENTIFIER
0: GUID_partition_scheme *1.0 TB disk0
1: EFI EFI 314.6 MB disk0s1
2: Apple_APFS Container disk1 1.0 TB disk0s2
[...]
/dev/disk4 (external, physical):
#: TYPE NAME SIZE IDENTIFIER
0: FDisk_partition_scheme *31.9 GB disk4
1: Windows_FAT_32 boot 268.4 MB disk4s1
2: Linux 31.6 GB disk4s2
-> sudo diskutil unmountDisk /dev/disk4
Password: ...
Unmount of all volumes on disk4 was successful
-> sudo dd bs=1m if=./2020-05-27-raspios-buster-lite-armhf.img of=/dev/rdisk4; sync
1768+0 records in
1768+0 records out
1853882368 bytes transferred in 122.702619 secs (15108743 bytes/sec)
-> sudo diskutil eject /dev/rdisk4
Disk /dev/rdisk4 ejected
You may be wondering why there's no Raspbian image available that already contains the ssh file. That's because it would be a security risk; in other words, you have to explicitly enable SSH through this route if you want it; otherwise, the Pis stay locked down. That's the right approach.
After inserting the SD cards into the Raspberry Pis, and connecting up the Ethernet cables to power them up and have them boot the images for the first time, we can see that this SSH configuration action was successful:
-> ssh 192.168.86.53
The authenticity of host '192.168.86.53 (192.168.86.53)' can't be established.
ECDSA key fingerprint is SHA256:jFgPSwjEQsCSUx+nJcZ6ub9EhoGC1I1vSX5uSvVc1YE.
Are you sure you want to continue connecting (yes/no)?
In the next post, we'll find out how I discovered the IP address(es) to use to connect, but for now, this is a great start - the SSH service responded to my request to connect (the "authenticity" message is just my machine saying "hey, I don't recognise this remote host - are you sure you want to proceed?") - we're all ready to start setting up our Pis for some clustering goodness!
Next post in this series: Finding the Pis on the network
This is a post in the "Brambleweeny Cluster Experiments" series of blog posts, which accompanies the YouTube live stream recording playlist of the same name. The video linked here is the one that accompanies this blog post.
Previous post in this series: Preparing the OS image
Next post in this series: Initial Pi configuration via Ansible
Having booted the Pis in the cluster using the OS image prepared earlier, we now need to find them so that we can continue with the setup.
What does that mean? Well, the Pis will have requested IP addresses via DHCP. In my case, I run DHCP via my Google Wifi setup, and have a range set up for DHCP leases. While I can guess what the IP addresses might be, it's not scientific. I could look at the Google Wifi app on my phone, and go through manually searching for the devices that called themselves something that includes the string "raspberrypi", then looking at the details to reveal the IP addresses. But that sounds like too much hard work, and not something I'd learn from.
Using nmap
I could use the nmap command that Alex uses, which has a wealth of possibilities. If all I wanted to do was to find the IP addresses, this would be what I'd want to use, specifying the -sn option (which means "don't do a port scan"; previously the option was -sP) for my home network (192.168.86.0/24), which would give results that look like this:
-> nmap -sn 192.168.86.0/24
Starting Nmap 7.70 ( https://nmap.org ) at 2020-03-22 17:08 GMT
[...]
Nmap scan report for chromecast-audio.lan (192.168.86.43)
Host is up (0.037s latency).
Nmap scan report for amazon-517762033.lan (192.168.86.47)
Host is up (0.10s latency).
Nmap scan report for pimodelb.lan (192.168.86.49)
Host is up (0.0022s latency).
Nmap scan report for 192.168.86.15
Host is up (0.026s latency).
Nmap scan report for raspberrypi.lan (192.168.86.54)
Host is up (0.0039s latency).
Nmap scan report for 192.168.86.47
Host is up (0.0033s latency).
Nmap scan report for 192.168.86.125
Host is up (0.0023s latency).
[...]
Nmap done: 256 IP addresses (28 hosts up) scanned in 2.38 seconds
->
If you're wondering about the way the network is written, i.e. 192.168.86.0/24, here's how that works. An IP (v4) address is a dotted quad number, i.e. four single byte values (range 0-255) that specify a combination of network number and host number. The number after the slash (24 in this case) tells you how many bits wide the mask for the network number is, with the remaining bits being the host number. Bearing in mind that the resolution of four bytes gives a total address space of 32 bits (4 x 8), 24 signifies that the first three numbers (192.168.86) represent the network number, and the fourth represents the host number(s).
This is how I see 192.168.86.0/24 in my mind:
192 168 86 0 decimal
11000000 10101000 01010110 00000000 binary
11111111 11111111 11111111 00000000 network mask (24 bits)
| | | |
+------------------------+ +------+
| |
network host
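That mask picture can be expressed as a small computation - here's a sketch of mine in JavaScript, ANDing the address with the mask to get the network number:

```javascript
// Convert a dotted-quad IP address to a 32-bit unsigned integer.
const toInt = ip =>
  ip.split('.').reduce((acc, octet) => (acc << 8) + Number(octet), 0) >>> 0;

// Convert a 32-bit unsigned integer back to dotted-quad form.
const toDotted = n =>
  [24, 16, 8, 0].map(shift => (n >>> shift) & 255).join('.');

// A /24 prefix means the mask has 24 one-bits followed by 8 zero-bits.
const prefix = 24;
const mask = (0xffffffff << (32 - prefix)) >>> 0;

// Bitwise AND of address and mask leaves just the network number.
const network = (toInt('192.168.86.53') & mask) >>> 0;

console.log(toDotted(mask));    // 255.255.255.0
console.log(toDotted(network)); // 192.168.86.0
```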
Given that I can work out what IP addresses might already be allocated on my network, I could eventually reach the conclusion that the following IP addresses were the four new Pis in the cluster: 192.168.86.53, 192.168.86.54, 192.168.86.55 and 192.168.86.56.
But that feels a little fuzzy to me.
Moreover, I'm learning about Ansible from Jeff Geerling, as I mentioned in Starting out with Raspberry Pi experiments, and want to use some of the Ansible goodness for the setup of the Pis, as explained in his wiki page Network the Raspberry Pis. Jeff has a nice networking setup section in his GitHub repo geerlingguy/raspberry-pi-dramble which I recommend you have a look at. In this networking setup he has a playbook (a series of tasks for Ansible to carry out on a set of remote hosts) called main.yml that sets up networking, including allocating specific IP addresses to specific hosts.
How are these hosts identified and defined? In a vars.yml file, an example of which is provided in that networking setup section. It contains a mapping of MAC addresses to hostname and IP address pairs, which is exactly what I want. In other words, I want to give each of the Pis in the cluster a hostname, and a specific IP address that will persist and that I can remember.
I'm going to jump ahead and show you what's in the vars.yml file for my Pi cluster setup here:
---
# Mapping of what hardware MAC addresses should be configured with specific IPs.
mac_address_mapping:
"dc:a6:32:60:60:95":
name: brambleweeny1.lan
ip: "192.168.86.12"
"dc:a6:32:60:60:77":
name: brambleweeny2.lan
ip: "192.168.86.13"
"dc:a6:32:60:60:44":
name: brambleweeny3.lan
ip: "192.168.86.14"
"dc:a6:32:60:60:e3":
name: brambleweeny4.lan
ip: "192.168.86.15"
# Nameservers to use in resolv.conf.
dns_nameservers:
- "192.168.86.5"
I want to give the four Pis host numbers in the range 12-15, and name them after the cluster name "Brambleweeny". I also want to tell them to use a local DNS server at 192.168.86.5 for domain name resolution. This is a tiny Raspberry Pi Zero W running the excellent Pi-hole.
But putting aside the IP addresses for a moment - how did I find out the MAC addresses?
Using arp-scan
Well it pleases me to say that I found them out using a bit of technology that dates back to the early 1980s, and relates directly to one of the fundamental and critical parts of the Internet protocol suite - the Address Resolution Protocol (ARP). Essentially, ARP provides a mapping between the link layer address of a network device (i.e. a MAC address in this case) and the internet layer address (i.e. the IP address in this case).
To work with ARP data there's a venerable program called arp-scan, which is standard on real operating systems such as Linux. It's a system binary, which means it lives in /usr/sbin ("sbin" is short for "system binaries"), which means, more or less, that it's for root use only.
Running arp-scan on this network address 192.168.86.0/24 reveals almost exactly what we're looking for: not only the MAC addresses, but the IP addresses that are associated with them.
Here's what the output of running arp-scan on my network looks like (I've modified parts of the addresses for security reasons):
-> sudo arp-scan 192.168.86.0/24
Interface: eth0, datalink type: EN10MB (Ethernet)
Starting arp-scan 1.9.5 with 256 hosts (https://github.com/royhills/arp-scan)
192.168.86.1 70:3a:cb:2e:c5:fb (Unknown)
192.168.86.22 70:3a:cb:32:0a:38 (Unknown)
192.168.86.32 00:26:2d:18:d0:12 Wistron Neweb Corporation
192.168.86.31 00:0e:58:68:59:33 Sonos, Inc.
192.168.86.33 f0:72:ea:30:59:e3 (Unknown)
192.168.86.37 64:16:66:40:5f:c3 (Unknown)
192.168.86.36 6c:ad:f8:6c:5a:3d AzureWave Technology Inc.
192.168.86.39 18:b4:30:ec:11:2a Nest Labs Inc.
192.168.86.39 18:b4:30:ec:51:2a Nest Labs Inc. (DUP: 2)
192.168.86.28 00:0e:58:8a:c6:92 Sonos, Inc.
192.168.86.15 dc:a6:32:60:60:77 (Unknown)
192.168.86.47 dc:a6:32:60:60:95 (Unknown)
192.168.86.48 9c:32:ce:7e:15:a1 (Unknown)
192.168.86.158 dc:a6:32:60:60:44 (Unknown)
192.168.86.125 dc:a6:32:60:60:e3 (Unknown)
192.168.86.20 6c:56:97:64:1d:6f (Unknown)
192.168.86.47 fc:65:de:08:1b:69 (Unknown)
192.168.86.44 f4:f5:d8:ed:13:fa Google, Inc.
192.168.86.43 54:60:09:eb:1a:dc Google, Inc.
192.168.86.21 a4:77:33:25:13:14 Google, Inc.
192.168.86.187 5c:aa:fd:24:11:84 Sonos, Inc.
192.168.86.24 3c:15:c2:b3:10:03 Apple, Inc.
192.168.86.29 20:16:b9:c2:1d:f1 (Unknown)
192.168.86.250 5c:aa:fd:02:15:48 Sonos, Inc.
192.168.86.26 20:df:b9:41:1d:24 (Unknown)
192.168.86.35 00:0e:58:f3:1e:6c Sonos, Inc.
192.168.86.40 1c:f2:9a:64:1c:22 (Unknown)
29 packets received by filter, 0 packets dropped by kernel
Ending arp-scan 1.9.5: 256 hosts scanned in 4.639 seconds (55.18 hosts/sec). 27 responded
->
Gosh that's nice, but how do I tell which are my new Raspberry Pis in the cluster?
Well, to answer that, we need to understand how MAC addresses are formed. Each address is a series of byte values, in hexadecimal. They're assigned to hardware devices, most commonly to network interfaces. In the case of the Pis, this is the RJ45 Ethernet port you can see in the top right of this picture of a Raspberry Pi 4:
Significantly, the first three bytes in a MAC address represent the hardware manufacturer, via a so-called Organisationally Unique Identifier (OUI). And if we look at the canonical list of OUIs we see that there's an entry for the Raspberry Pi organisation thus:
DC-A6-32 (hex) Raspberry Pi Trading Ltd
DCA632 (base 16) Raspberry Pi Trading Ltd
Maurice Wilkes Building, Cowley Road
Cambridge CB4 0DS
GB
How convenient!
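As an aside, because the OUI is simply the first three bytes of the address, you can pluck it out of any MAC address with cut. Here's a small sketch using one of the cluster's addresses as sample input:

```shell
# The OUI is the first three colon-separated bytes of a MAC address;
# cut with ':' as the delimiter gives us fields 1 through 3.
mac="dc:a6:32:60:60:95"
printf '%s\n' "$mac" | cut -d: -f1-3
# → dc:a6:32
```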
Incidentally, the building in this address is named after one of the fathers of modern computing, Maurice Wilkes, who worked on one of the earliest stored-program computers EDSAC, and who also invented microprogramming, which was first described in Manchester in 1951.
So all we have to do is reduce the output of arp-scan by filtering it to show only devices manufactured by the Raspberry Pi organisation, which has the OUI DC-A6-32 or, as it's more commonly written, dc:a6:32:
-> sudo arp-scan 192.168.86.0/24 | grep dc:a6:32
192.168.86.47 dc:a6:32:60:60:95 (Unknown)
192.168.86.15 dc:a6:32:60:60:77 (Unknown)
192.168.86.158 dc:a6:32:60:60:44 (Unknown)
192.168.86.125 dc:a6:32:60:60:e3 (Unknown)
->
Bingo!
DHCP leases had indeed been given out for hosts 47, 15, 158 and 125 in the 192.168.86.0/24 network, and there we have each associated MAC address too.
So in preparing for the networking setup, the MAC addresses went into the vars.yml file as shown earlier, along with the to-be IP addresses.
Of course, we need to help Ansible find the Pis to make this configuration, and for that we need to specify a list of the existing IP addresses, which we now also have. Those go into the Ansible inventory, effectively a list of hosts in this simple case.
Based on the example.inventory in Jeff's repository, here's what we need for the setup in the case of our Brambleweeny cluster:
[brambleweeny]
192.168.86.47
192.168.86.15
192.168.86.158
192.168.86.125
[brambleweeny:vars]
ansible_ssh_user=pi
I've changed the name of the group from "Dramble" to "Brambleweeny", and of course adjusted the IP addresses to the as-is ones that exist right now. There's also a variable in this file, ansible_ssh_user, but we'll ignore that for now.
At this stage, we have found the Pis on the network, and gathered the appropriate information to supply to Ansible so that we can ask it to make the networking setup on each host, on our behalf.
We'll get to that in the next post. Until then, happy arping!
Next post in this series: Initial Pi configuration via Ansible
]]>I'm experimenting with a cluster of Raspberry Pi computers, and sharing that experimentation as I make my slow progress towards enlightenment.
That sharing comes in two forms: blog posts, which you'll find listed here, and live streams, which you can find over on my YouTube channel at https://youtube.com/djadams-qmacro. The channel's home page is where you'll see upcoming live streams, and the recordings are available in a playlist, also called Brambleweeny Cluster Experiments.
Here are the posts:
...
Share & enjoy!
]]>Update (08 Nov): This blog post is available in audio format as an episode on the Tech Aloud podcast. Also, I recorded a CodeTalk episode on this subject with Ian Thain - watch it here: https://www.youtube.com/watch?v=5ffTjFdjs8M.
If you read one technical article today*, make it the About CAP page in the online documentation, which starts with the following overview:
The "SAP Cloud Application Programming Model" is an open and opinionated framework of languages, libraries, and tools for building enterprise-grade services and applications. It guides developers through proven best practices and a great wealth of out-of-the-box solutions to recurring tasks.
Key for me here is that the design principles at CAP's core (open and opinionated, zero lock-in, non-intrusive and platform agnostic), which have influenced what CAP is and what it can do for us, explain why it is fundamental.
*(If you don't have time to read it, it's also available as a podcast episode on the Tech Aloud podcast here: SAP Cloud Application Programming Model (About CAP) - SAP - September 2019.)
CAP provides the substrate within or upon which actual services and applications can be designed and built, cloud-ready.
It is the fresh, fertile and well-watered soil in which we can grow our flowers and food.
It is the backbone which is the stable base that connects everything together, the trunk from which all branches can flourish.
To bring these metaphors a little closer to the subject at hand, CAP is like the combination of mores and spoken languages upon which society is built ... or, in a narrower computing context, it's the programming language that we use to express our solutions.
What this suggests to me is that if we see CAP in this way, we should master enough of it to express ourselves, to start building services, to plant seeds and nurture them into blossom, to build upon and build with.
Just like we learn a language with which to express ourselves, whether that language is English, international sign language, or APL, we should make a point of learning what CAP is, how it works, what it can do for us, and how to embrace and wield the power that it gives us as developers.
CAP is not an end in itself, it is a means to other ends. And my goodness, in my experience, what a means it is!
It's hard now to remember the times when the effort to create a functioning read-write OData service was so great that proof-of-concept projects didn't even get off the ground. Now, with literally fewer than ten lines of declarative code you can spin up a fully formed CRUD+Q OData service, and what's more, adding custom handlers to augment the standard handlers is also only a few lines of code away.
Similarly, I had never really seriously attempted mocking a business service from the SAP API Business Hub before, as the effort was too great. Now with CAP it's a matter of minutes.
It's hard to remember what it was like to explore how annotations actually drive Fiori elements, because of the complexity involved in establishing where to store and how to serve up annotations along with an existing OData service. With CAP you just add them to a file, using Notepad or similar, and you're done. The time between tweaking annotations and refreshing your Fiori elements app to see what those tweaks do is now measured in seconds (and yes, I do that, I'm just like you :-)).
I can't actually remember a time when I didn't have to think about specific persistence layers and machinery when prototyping a service, until CAP came along.
And the mental heavy lifting previously needed to consider how I might go about building a solution that involved persistence, built-in extensibility, enterprise messaging and more ... well, as a regular developer with limited brain power, I'm now in a much better position to create solutions like that.
With the building blocks such as the family of Domain Specific Languages*, with the convention-over-configuration approach, with the first class support for today's most popular language, CAP helps you start smart, start your development project at a level far higher up, far nearer the business domain, than you could have started previously.
You could say that this higher level starting point puts you closer to the cloud before you've even begun!
*(See the CDS language reference documentation to learn more about the CAP DSLs.)
So, my advice is ā learn CAP, understand how to make use of the superpowers it gives you, and be mindful of its key role as a development substrate letting you focus on the business domain at hand.
And, in the nicest possible way, just as for me my knowledge of the English language and my understanding of social rules and customs fades into unimportance when interacting with my fellow human beings, consider CAP as unimportant in the same way. Fundamental, something you should learn and be able to make full use of, but a means to an end.
]]>Earlier this morning, on my travels, a tweet from David Ruiz Badia caught my eye:
"Artificial intelligence + code repository on git = Autocomplete models when coding by @timoelliott Experience Intelligent Summit Barcelona #Intelligententerprise #experiencemanagement #sapchampions @SAPSpain"
It wasn't the text or the main subject of the tweet that caught my attention, but the code on the slide that was shown in the accompanying picture.
Luckily the code on the slide is clear enough to read, so I won't reproduce it here.
It's Ruby, and fairly simple code to sum up the total number of lines in files, by file type (extension), in a given directory. So for example, assume you have the following files, each containing the indicated amount of lines:
a.txt (3 lines)
b.txt (4 lines)
c.dat (1 line)
d.txt (2 lines)
e.dat (5 lines)
What you want this program to produce is something like this:
dat -> 6
txt -> 9
Thinking of files and lines immediately switched my brain to shell mode, where one part of the shell philosophy (do one thing and do it well - also attributable to the Unix philosophy in general) gives us the wc program, which produces word, line, character and byte counts for files (and that's about it).
Another part of the philosophy is "small pieces loosely joined", which, in conjunction with the pipeline concept, and combined with the wonderful simplicity of STDIN (standard input) and STDOUT (standard output), gives us the ridiculously useful ability to send the output of one command into the input of another.
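As a trivially small illustration of that loose joining (the fruit names here are just made-up example data), three tools that each do one thing can be snapped together into a pipeline:

```shell
# Three small pieces, loosely joined: printf emits lines on STDOUT,
# sort orders them, and head takes just the first one.
printf 'pear\napple\nbanana\n' | sort | head -n 1
# → apple
```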
This ability might seem somewhat familiar, particularly if you've been discovering the fluent interface style of method chaining in API consumption, as recently shown in how the SAP Cloud SDK is used - here's an example from a tutorial "Install an OData V2 Adapter" which is part of one of this year's SAP TechEd App Space missions "S/4HANA Extensions with Cloud Application Programming Model":
BusinessPartnerAddress
.requestBuilder()
.getAll()
.select(
BusinessPartnerAddress.BUSINESS_PARTNER,
BusinessPartnerAddress.ADDRESS_ID,
BusinessPartnerAddress.CITY_NAME,
)
.execute({url:'http://localhost:3000/v2'})
.then(xs => xs.map(x => x.cityName))
.then(console.log)
Anyway, I'm always happy for opportunities to practise my basic shell skills and Unix commands, so I thought I'd have a go at "finishing the sentence" in my mind, the one that had started with wc, and write a pipeline that would do the same thing as that lightly pedestrian Ruby code.
I realise I don't have much context as to why the code is there or why it looks like it does - it relates to machine learning powered autocomplete features in code editors, so was likely something simple enough for the audience to understand but verbose enough to be of use as an example.
Starting with wc on the files, using the -l switch to request line counts, we get this:
→ wc -l *
3 a.txt
4 b.txt
1 c.dat
2 d.txt
5 e.dat
15 total
→
There's nothing other than these 5 files in this directory, as you might have guessed.
That's something we can definitely work with - we need to sum the numbers by file extension (txt or dat in this example).
If we're to pass the output of wc directly into another program to do the summing, we may trip ourselves up because of this line at the end of the wc output:
15 total
We don't want the value 15 from that total summary line to be included. So we use another program to strip that line out, and pipe wc's output into that.
On GNU-based Unix or Linux systems, we can use the head program to do that for us. head will display the first N lines of a file. The GNU version of head has a -n flag that can take a negative number, meaning "all but the last N lines", so that we can do this:
→ wc -l * | head -n -1
3 a.txt
4 b.txt
1 c.dat
2 d.txt
5 e.dat
→
The nice thing about this approach is that we will always strip off just the last line.
Observe how the output of wc has been piped into the input of head. If you wanted to do this in a very inefficient but more or less equivalent way, using an intermediate file, you'd have to do this:
→ wc -l * > intermediatefile
→ head -n -1 intermediatefile
3 a.txt
4 b.txt
1 c.dat
2 d.txt
5 e.dat
→
Here, the > symbol is redirecting the STDOUT from wc to a file called intermediatefile, and the head program reads from STDIN or, if a filename is specified as it is here, from that file.
On macOS, a descendant of BSD Unix, the GNU version of head isn't available by default, and so we cannot avail ourselves of the -n -1 approach. Instead, we'd use the grep command, which in its basic form prints lines that match (or don't match) a pattern. We can use grep's -v switch to negate it, i.e. to get it to print lines not matching the pattern, and specify " total$" as the pattern (the dollar sign at the end is a regular expression symbol that anchors the text " total" to the end of the line from a match perspective). While we do have to be careful not to have a file called total, it will do the job for us here:
→ wc -l * | grep -v ' total$'
3 a.txt
4 b.txt
1 c.dat
2 d.txt
5 e.dat
→
Same result. Nice.
Now we have some clean and predictable input to pass to another program. We'll use awk, which is a very useful and powerful text processing tool, along with its sibling sed. The wonderful programming language Perl took inspiration from both awk and sed, and other text processing tools, as it happens.
It may be interesting to you to know that awk's initials are from its authors, Aho, Weinberger and Kernighan, three luminaries from Bell Labs, the birthplace of C and Unix. On the Tech Aloud podcast, you'll find an episode entitled "C, the Enduring Legacy of Dennis Ritchie - Alfred V. Aho - 07 Sep 2012" to which you may enjoy listening.
awk reads lines from STDIN (standard input) and is often used to rearrange fields in those lines, or otherwise process them. In this case we're going to read in the output from our pipeline so far, and get awk to start out by splitting each line up into separate pieces, so that we go from this:
3 a.txt
4 b.txt
1 c.dat
2 d.txt
5 e.dat
to this:
3 a txt
4 b txt
1 c dat
2 d txt
5 e dat
In this new state we can now distinguish the file extensions, and thereby have a chance to sum the line counts by them.
To do this, we use the -F switch, which allows us to define the "field separator", i.e. which character (or characters) we want awk to split each line on. In our case we want to split on space, and also on period. So we do the following, specifying a simple in-line awk script as the main parameter:
→ wc -l * | head -n -1 | awk -F '[\ .]' '{print $2, $NF}'
3 txt
4 txt
1 dat
2 txt
5 dat
The space in the list of delimiters is escaped here with a backslash (\); strictly speaking that's not necessary inside single quotes, but it does make the space easier to spot.
What the script ({ print $2, $NF }) is doing is simply printing out field number 2 and the last field. We specify field number 2 ($2) because there's an empty field number one, a result of the leading space we've split on. We get the last field by specifying $NF: NF represents the number of fields, so $NF is the value of the field at that position, i.e. the last one.
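A quick way to see the difference between NF (the count of fields) and $NF (the value of the last field) is a one-liner like this, using a line of made-up input:

```shell
# With the default field separator, "3 a txt" has three fields,
# so NF is 3 and $NF is the third field's value, "txt".
printf '3 a txt\n' | awk '{ print NF, $NF }'
# → 3 txt
```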
Note that for every line of input, we get a line of output. This is deliberate - awk executes the bit in curly brackets for each line it processes. But we can also get awk to do something at the beginning, or at the end, of processing. Consider a change from the simple script that we have now:
{ print $2, $NF }
to this:
{ counts[$NF]+=$2 }
END { for (ext in counts) print ext, "->", counts[ext] }
This will accumulate the individual line counts for each file into an associative array counts, keyed by the extension (e.g. txt). The { counts[$NF]+=$2 } part runs for each line coming in on STDIN. Then at the end of processing, the block with the for loop is executed, printing out the totals by extension.
Let's see this in action:
→ wc -l * | head -n -1 | awk -F'[\ .]' '{counts[$NF]+=$2}
> END {for (ext in counts) print ext, "->", counts[ext]}'
dat -> 6
txt -> 9
The > symbol at the start of the second line is the "continuation" character, put there by the shell to tell me it was expecting more input after I'd hit enter at the end of the first line (because I hadn't yet closed the opening single quote).
And there we have it. We only used a few features of awk, but they are certainly powerful enough for this task, when combined into a pipeline with wc and head (or grep).
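If you'd like to try the whole thing end to end, here's a self-contained sketch. It recreates the made-up example files from above in a scratch directory first; grep -v stands in for head -n -1 for portability beyond GNU systems, the field separator class gets a + so the split tolerates however much padding your wc adds, and the result is piped through sort because awk doesn't guarantee an iteration order for for (ext in counts):

```shell
# Recreate the example files, each with the indicated number of lines
dir=$(mktemp -d) && cd "$dir"
for spec in a.txt:3 b.txt:4 c.dat:1 d.txt:2 e.dat:5; do
  seq "${spec#*:}" > "${spec%:*}"   # e.g. a.txt gets 3 lines
done

# Count lines per file, drop the total line, then sum by extension
wc -l * | grep -v ' total$' \
  | awk -F'[ .]+' '{ counts[$NF] += $2 } END { for (e in counts) print e, "->", counts[e] }' \
  | sort
# → dat -> 6
# → txt -> 9
```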
I'd encourage you to spend some time in a text-based Unix or Unix-like environment. All the major PC operating systems today have such environments available, either directly (with Linux and macOS) or indirectly via VMs (with ChromeOS and Windows 10).
The future is terminal. Happy pipelining!
]]>There's such a rich seam of material out there to read and learn from that it's more or less impossible to keep up. Technical articles and blog posts are published day in day out, and I can only find time to read so many. I'd been thinking that if I could find a podcast where articles and blog posts are simply read out, aloud, I could digest a little more via audio while on the move, when travelling or even just going for a walk.
I couldn't find such a podcast, so I decided to start one myself. It's called "Tech Aloud" and is hosted on Anchor FM. It's already available in many of the podcast systems:
If you want to add it to your own podcast listening workflow, the RSS feed URL is: https://anchor.fm/s/e5dc36c/podcast/rss.
You can learn more about this new podcast by listening to the first episode "Welcome to Tech Aloud". I've also written a post on the SAP Community about this podcast, which provides a bit more context: "Tech Aloud podcast - an introduction". Moreover, I'm encouraging folks to submit content suggestions during the SAP TechEd Barcelona week in this post: "Submit suggestions for Tech Aloud during SAP TechEd Barcelona".
I have a system for bookmarking articles and posts that I want to read, and this podcast is me basically dipping into that list, picking entries out that seem suitable for audio, and recording them as episodes in this new podcast. How do I pick which articles or posts to read aloud? Well, there are a few rules of thumb:
First, each audio episode should be fairly short (say, up to 10 minutes), so super long essays are not going to make the cut, no matter how interesting they are (I reserve the right to make exceptions, though!).
Second, the content must be mostly text; in other words, if it's code or diagram heavy, it's not going to translate well into a pure audio format.
And finally, the range of topics is wide, and basically whatever takes my fancy.
On that last point, I might even think about letting you submit requests (for certain articles or posts of your choosing to be included). More on that another time, perhaps.
Anyway, I've also created a GitHub repo to manage my activities relating to the production of this podcast - mainly listing the articles or posts that I want to read out loud, and which ones I've done in a project ("Episodes") in that repo.
Ok, one final note - my reading aloud of any given article or blog post does not represent any sort of endorsement from me - it's just that I find it interesting.
Oh yes, and don't forget to subscribe, and if you're subscribing to this one and interested in SAP tech, you should also subscribe to the Coffee Corner Radio podcast which has a whole host of great stuff on subjects within the SAP tech ecosphere.
]]>I'm listening to and rather enjoying the Audible Original "Alien: Sea Of Sorrows". Beyond being a good story (along with its related titles such as "Out Of The Shadows" and "River Of Pain"), the audio genre is new to me - rather than just being a narration, it's a full-on audio action experience. Definitely recommended.
Anyway, in Chapter 2, Rawlins, Morris and someone else are inspecting the Deep Space Mining Orbital spaceship "Marion", destroyed in unusual circumstances (in "Out Of The Shadows", actually).
Closely examining a fragment of the ship, the dialogue between the characters goes something like this:
... "OK, looks like a section from a docking arm."
Morris: "Bay three docking arm from the DSMO Marion, to be exact."
Rawlins: "You can tell which docking bay this came from? Is it that obvious?!"
Morris: "No, there's a serial number on the end plate."
There's a lovely contrast between how impressed Rawlins is, and how Morris's response is honest and matter-of-fact, and how it conveys an almost but not quite imperceptible playfulness on the part of the author, Dirk Maggs. Because it reminded me of another passage, from my favourite series of all time, Douglas Adams's Hitch Hiker's Guide To The Galaxy, in particular the original radio series from the late 1970s. In Part 2, or rather "Fit the Second", there's this exchange between Arthur and Ford, who have just been picked up from open space and found themselves inside the Heart of Gold spaceship:
Ford: "I think this ship is brand new, Arthur."
Arthur: "How can you tell? Have you got some exotic device for measuring the age of metal?"
Ford: "No, I just found this sales brochure lying on the floor."
Again, there's the banality of the answer which wonderfully contrasts with alternative and potentially amazing explanations.
I can't help but think that the conversation on DSMO Marion is somewhat of a small homage to this classic exchange.
]]>First, before reading the first paragraph of the post which explains where the word comes from, I was able to get a general idea of what the word was supposed to mean. I looked into where the word came from, and it seems to have started appearing in a few sources, including:
There's even a GitHub organisation and repo with this name too.
The thing is, to me (and I guess I should declare myself a pedant and proud of it) this neologism is ugly and not really thought through, for these reasons:
It's a hybrid of Greek (poly) and Latin (nimbus). There are of course other words that are hybrid, but if we're going to create new compound words, surely we want to avoid hybrids? After all, the original meaning of the word "hybrid" has rather negative connotations.
If we're going to go for tacking a Latin word onto the end of a Greek one, let's at least make sure it's the right Latin word. "Nimbus" doesn't mean simply "cloud" in Latin, it means "dark cloud", like a thunder or storm cloud. Perhaps not exactly the meaning we want to convey when talking about something positive.
The word seems to be employed mostly as an adjective right now. Using the raw nominative singular form of the noun "nimbus" is hardly appropriate, then. At least change the ending to a more traditionally adjectival suffix "-ic", in other words "polynimbic".
I can't see this word without my face contorting into a grimace, especially when it would have been easy to come up with something a lot less unpleasant.
My suggestion? Polynephic. Nephic from the Greek for cloud: νέφος.
Thanks for your attention :-)
]]>In the 1970s I had a book which I cherished: The Observer's Book of Commercial Vehicles. It was pocket-sized & well thumbed, and listed various vans and trucks. I could identify a Volvo F88 tractor unit from 500 paces.
I guess the interest in spotting and identifying things of interest has never left me, and so I find myself delighting in various examples of style with respect to expressing oneself in code. These days it's mostly in the context of JavaScript; unlike other languages, JavaScript is not only multi-paradigm but extremely malleable, making it possible to leave behind a trace of one's character, often by accident.
This post is to record some of those characterful or otherwise interesting expressions in code, taken predominantly from the Node.js codebase for the SAP Cloud Application Programming (CAP) Model.
Who knew there could be so much richness to write about in a single file of fewer than 100 lines? This is the main server mechanism, in the form of a server.js file in @sap/cds, that uses the Express web framework to serve resources both static and dynamic.
If you want to see this code in context, or even get it running in your favourite debugging setup to examine things step by step, pull the @sap/cds package from SAP's NPM registry, dig in to the sources, and have a look.
const { PORT=4004 } = process.env
return app.listen (PORT)
This is a lovely example of object destructuring in action, with a bonus application of the default values option. Plus of course the appropriate use of a constant. This takes whatever value the PORT environment variable might have, falling back to a default of 4004 if a value wasn't set.
The other thing that stands out to me is a particular affectation that I'm still not sure about - the use of whitespace before the brackets in the app.listen call. It does make the code feel more like a spoken language, somehow; here's another example of that use of whitespace from the same file:
await cds.serve (models,o) .in (app)
I think I like it.
app.get ('/', (_,res) => res.send (index.html)) //> if none in ./app
The use of the underscore (_) here is not particular to JavaScript, nor uncommon, but it's nice to see it in use here. As a sort of placeholder for a parameter that we're not interested in using in the function that is passed to app.get, the underscore is for me the perfect character to use.
If you're familiar with HTTP server libraries, you'll be able to guess what that parameter is that is being ignored. An HTTP handler function is usually passed the incoming HTTP request object, and the fledgling HTTP response object. The response object is the only thing that is important here, so the signature is
(_,res) => ...
rather than
(req,res) => ...
Did you notice something odd in that previous example? There was a call to res.send like this:
res.send (index.html)
At first I was only semi-aware of something not quite right about the way that appeared. But then, looking further down in the code, I suddenly realised: index.html is not a filename ... it's a property (html) on an object (index)! Further down, we come across this, where it's defined:
const index = { get html() {
...
return this._html = `
<html>
<head>
...
`
}}
I've not seen the use of a getter in the wild that often. According to the MDN web docs on getters: "The get syntax binds an object property to a function that will be called when that property is looked up".
Put simply, you can define properties on an object whose values are dynamic, resolved by a call to a function. Of course, you can also define functions as property values, but that would mean that you'd have to use the call syntax, i.e. index.html() in this case. But given the emphasis on something that's more readable than normal, I can see why a getter was used here. Although I think it might take a bit of getting used to.
Now the cat is out of the bag, it gets everywhere! I wanted to include this example of destructuring because at first I was scratching my head over the line, but then after staring at it for a few seconds I realised what I was looking at.
const { isfile } = cds.utils, {app} = cds.env.folders, {join} = require('path')
const _has_fiori_html = isfile (join (app,'fiori.html'))
(I'm including the second line only to give a bit of usage context).
Part of the reason this had me wondering was simply due to the whitespace - I guess the author was in a hurry as the spacing is not consistent. I thought there was something special about the first pair of braces around isfile, but in fact it's just three separate constant declarations, all on the same line: isfile, app and then join.
The value for each is resolved through destructuring. For example, the app constant comes from the cds.env.folders property, which itself has a number of child properties, one of which is app. cds.env is part of CAP's Node.js API, and provides access to an effective environment computed from different layers of configuration.
Going one example further, the join constant ends up having the value of join in the Node.js builtin path package (the value is a function), as you can see in this snippet from a Node.js REPL session:
=> node
> require('path')
{ resolve: [Function: resolve],
normalize: [Function: normalize],
isAbsolute: [Function: isAbsolute],
join: [Function: join],
relative: [Function: relative],
...
Anyway, while there's more to explore and pore over, I'll leave it here for now. Who knows, I may write another one of these covering more finds in the source code. If you're interested, let me know.
]]>On the evening of Thursday 11 July 2019, after a long and latterly very painful struggle, dad passed away in his sleep. We were in the room at Park Hills Nursing Home with him, sitting by the window, and didn't even notice, it was that peaceful.
He was loved by many people, from many walks of life, and he has close friends from so many of his ventures, from farming, haulage, skip hire, chauffeuring, dealings in antiques ("clutter"), running the door at Cruz 101, and most recently from his volunteer work at the charity shop where he enjoyed his time pointing to things with his stick, trying his best to tolerate cheeky customers, and messing up till transactions.
We could write so much here, but whatever we put would not be a match for the memories you undoubtedly have of him. So we'll keep it brief.
Dad, we love you and miss you terribly. Life won't be the same without your sharp wit, your eye for a bargain, and your swearing. And your love.
Donations
In his last weeks, dad was cared for by some amazing people, initially at Dr Kershaw's Hospice and then at Park Hills Nursing Home. We cannot thank the staff at these two places enough - it takes a special kind of person to look after folks like they do.
Instead of flowers, we would please ask you to consider donating to the Dr Kershaw's Hospice charity - you can do this via the Just Giving page we have set up:
]]>These survey results are of particular interest to me in my developer outreach & advocacy role within the SAP Developer Relations team, especially now that I've had just over a year to settle in and find my feet. Another document I'm looking forward to reading is hoopy.io's State of Developer Relations 2019 report, which I hope to get to next week.
Ok, so here goes. I'm picking out particular results as and when they pique my interest, in particular from the Developer Profile and Technology sections. By the way - kudos to the producers of this report in making every section and subsection linkable and referenceable - this is a well put together hypertext document!
I didn't see the survey itself (otherwise I would have probably completed it as well) but I'm curious about how they asked the question that differentiated developer types listed here. I'm guessing it wasn't just a "pick one", partly from the accompanying text for this section, and partly because what jumps out is that "enterprise developer" is separate from other types such as full-stack, back-end, front-end and so on.
I'm an enterprise developer and a back-end developer (and sometimes a front-end developer of course too). I don't see how this distinction adds value, unless it's to show "of the respondents, X% identified as working in the enterprise software context, as well as expressing their actual developer type".
What struck me here was the comment in the accompanying text: "Developers who work with languages such as VBA, F# and Clojure have the most years of professional coding experience".
This came as quite a surprise - I would have expected to see perhaps Java in this list (some say Java is the new COBOL). I can understand seeing VBA there but certainly not the two functional languages F# and Clojure, which no-one is going to claim are mainstream. That said, they are both wonderful ... I've been exploring them both over the last couple of years - see my other blog Language Ramblings for some posts on that subject.
Note that later on we see also that Clojure and F# are number one and number two in the list of Top Paying Technologies!
I'm not surprised at what these results tell us here, but I am a little disappointed. Being a Classics (Ancient Greek, Latin, Sanskrit, Philology) graduate I guess I must put myself into the "humanities discipline" category which makes up a mere 2.1% of the respondents.
I know I'm in the minority with my degree, but didn't expect the minority to be that small!
The text that accompanies the results in this section starts with "Developers are lifelong learners". That resonates very much with me; when people ask what I do, I often say "I learn". See the section Trying to keep up in my Monday Morning Thoughts post on impostor syndrome for more background on this.
If I were feeling bold, perhaps I'd go so far as to say that if you're not learning, you're not a developer.
The accompanying text here suggests that in the non-developer world, Reddit doesn't even appear in the top ten list. In this list, it's at number one. Why? I'd say because it's the new Usenet which is where a lot of developers discussed low level detail and esoterica about development topics with like-minded individuals. Usenet (and NNTP) isn't really a thing these days, and Reddit has taken over where it left off.
As a developer, I use Reddit to follow nerdy discussions for some of my areas of interest including Mechanical Keyboards, Twitch, Vim, i3wm, ChromeOS and Crostini.
Programming, Scripting, and Markup Languages
(That ugly second comma in the section title is from the original results page, not me!)
It's not a secret that I am a fan of JavaScript as a language, for many reasons - it's available for use in both front-end and back-end development contexts, it's a flexible multi-paradigm language that is evolving nicely, it is accessible and easy to get started with, and (warning, controversial!) the lack of a type system helps rather than hinders. Not in every context, but in many.
So I'm happy (but not surprised) to see JavaScript in first place in the "most popular / most commonly used" list here. It's also held this top spot for seven years in a row.
This gives me confidence to continue on my trajectories (with live streaming, blogging, CodeJams and so on) with JavaScript as a backbone language.
Further results in this section show that JavaScript is also the second most wanted language, just behind Python.
Talking of JavaScript on the back-end, it's also not surprising to see Node.js (backend JavaScript, essentially) at the number one spot on the list of most popular other frameworks, libraries and tools. Not only that, but Node.js is also top of the list of most wanted other frameworks, libraries and tools.
Go JavaScript!
Most Loved, Dreaded, and Wanted Platforms
Talking of most loved and wanted, it tickles me to see that Windows doesn't even make the top ten list of most loved platforms. But it's certainly up there (along with WordPress, Watson, Heroku and Arduino) in the top five most dreaded. I wonder why that is?
Most Popular Development Environments
Top of the list here, from all respondents, is Microsoft's Visual Studio Code. And for good reason, it's a great piece of software that works really well for me and many developers.
I think it's fair to say that over the years, some editors have come in and either stayed or gone out of fashion. Restricting myself to "local" editors and IDEs, ones that come to mind in this context are TextMate, Sublime Text and Atom. Of course, many developers still use those, but the IDE du jour, without a doubt, is Visual Studio Code.
I'm happy to say that SAP have an extension for it, to help developers build apps with the SAP Cloud Application Programming Model. Go to the SAP Development Tools - Cloud page and look for the CDS Language Support for Visual Studio Code extension.
Some of you may know that my long-term love is for Vim, that masterpiece of philosophy, design and implementation that has been around for a very long time. And it's heartening to see Vim at fifth place in this particular list, above Sublime Text, Atom, TextMate, Eclipse and of course above Emacs :-)
One notable entry in the list, even though it's in last place (but it's on the list) is Light Table, a fabulous reimagining of an editor by Chris Granger, which I learned about in my Clojure adventures. You can see Light Table in action in some of the videos from Misophistful from whom I learnt stuff about Clojure.
That's it for now - perhaps I'll write down some more thoughts about the results in the other sections of the survey. But for now, it's time for a beer (and another look at my Clojure books). Cheerio!
]]>This weekend I turned to a post that was highlighted originally by Fred Verheul: Transducers: Efficient Data Processing Pipelines in JavaScript by Eric Elliott. It turns out that this post is part of a series on "Composing Software", so I turned to the first post - Composing Software: An Introduction as I didn't want to miss anything.
Reading at my leisurely pace, mindful of what Erik Meijer seems to say a lot, which is "... if you stare at this long enough", I didn't get very far into the post before I found something of wonder, and thought I'd share it.
pipe
In talking about the basics of composition, specifically of functions, Eric Elliott talks about utilities that make function composition easier. He mentions pipe, which is available in my favourite functional programming library for JavaScript - Ramda.
He also provides a simple implementation, that looks like this:
const pipe = (...fns) => x => fns.reduce((y, f) => f(y), x);
Let's use the rest of the time on this post to stare at this for a few minutes, as there's some goodness to unpack. First though, let's see how pipe is used.
Here's a simple example, where we use a predefined function dbl that doubles a number, and a lambda (anonymous) function that adds 42 to a number. We use these two functions inside of pipe, which transforms the input (5) in a sort of "pipeline process":
const dbl = x => x * 2
console.log(
pipe(
dbl,
x => x + 42
)(5)
)
//=> 52
In a recent talk I gave at SAP Inside Track Frankfurt - "ES6 JavaScript in the wild" - I took the audience through a number of language features introduced with ES6, a version of ECMAScript (JavaScript) introduced in 2015.
(Photo courtesy of Wim Snoep)
In the definition of pipe here we can see a few of them in action. Also, in a couple of the episodes of our "Hands-on SAP dev with qmacro" series, we've seen that the reduce function is a fundamental building block, sort of like the hydrogen of the functional universe. For example map and filter can be built with reduce.
So let's have a closer look at the definition, and see what we can see:
const pipe = (...fns) => x => fns.reduce((y, f) => f(y), x);
const
First, we have the const declaration, which introduces a constant. My early journey towards functional programming involved starting to think of things that didn't mutate, and declaring values as constants helped me remember that by forcing me to write using values that don't change. In this case it's a function definition, but I use const equally to define other types of values.
rest parameters
Next, we see the use of the rest parameter syntax (...), which is a great way of saying, either in a destructuring context or in the context of function parameter declarations, "whatever values haven't been assigned to parameters already, capture them all (the rest, essentially) in an array". So in this case, all the function definitions specified as arguments to a call to pipe (in this case dbl and x => x + 42) are captured into the fns array.
fat arrows
Then we see our friend the fat arrow (=>), used to concisely define functions. The conciseness is underlined here in particular because, here, pipe is being declared as a function that takes some parameters ((...fns)) and produces a function that takes a single parameter (x) which produces whatever the fns.reduce expression evaluates to (we'll look at that next).
Stare at this definition for a minute, perhaps with a sip of nice coffee, and marvel not only at the concise nature, but also at the power that JavaScript puts in your hands as a programmer, in giving you the ability to treat functions as first class citizens: to receive functions as arguments in function calls, and to produce functions as results of function calls.
Functions that receive and / or produce other functions are called higher order functions. This concept is not specific to ES6 nor to JavaScript, but the prevalence of the use of higher order functions has increased in JavaScript with ES6 because the language improvements have made the concept very easy to express.
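A tiny illustration of this, with a hypothetical example of my own rather than one from the article:

```javascript
// twice is a higher order function: it receives a function f
// and produces a new function that applies f two times
const twice = f => x => f(f(x));

const inc = x => x + 1;
console.log(twice(inc)(10)); //=> 12
```

twice neither knows nor cares what f does - it just composes it with itself, which is the essence of treating functions as values.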
reduce
Let's finish by looking now at the fns.reduce expression, noting in passing that another small thing of beauty is the fact that this function that is being produced by the pipe function has, as its body, a single expression.
The reduce function is called on the array of functions provided in the call to pipe (dbl and x => x + 42 in the example shown). The reduce function itself takes two parameters - a "reducer" function that is executed for each of the items in the array being reduced over (i.e. for each of the functions), and a starting value.
Here are those two parameters:
const pipe = (...fns) => x => fns.reduce((y, f) => f(y), x);
// -------------- -
// ^ ^
// | |
// reducer function ---------+ |
// starting value ------------------+
The reducer function itself is defined with two parameters: the "accumulator", i.e. the value that has been built up (starting out as the starting value) so far with each reduce iteration, and the "next" item being reduced over (the functions in fns in this case) one by one.
The body of the reducer function here is again, a single expression, which calls the function in question (as they are iterated through) on the current value of the accumulator.
Focusing only at the reducer function, here are those two parameters and the single expression:
const pipe = (...fns) => x => fns.reduce((y, f) => f(y), x);
// - - ----
// ^ ^ ^
// | | |
// accumulator ---------+ | |
// next item ------------+ |
// reducer function body -------------------+
So with all this in mind, can we imagine how the whole thing works, with the invocation example we saw earlier?
const dbl = x => x * 2
console.log(
pipe(
dbl,
x => x + 42
)(5)
)
Let's try.
The call to pipe is made specifying two function definitions, dbl and x => x + 42. This produces a function that has captured (closed over - forming a closure) those two function definitions, and is expecting a single value to be received in x. Once that value is received (the value is 5 in this case), the function x => fns.reduce((y, f) => f(y), x) can be evaluated, which we can visualise like this:
Function invocation (y, f) => f(y) Value
(starting value) 5
(5, dbl) => dbl(5) 10
(10, x => x + 42) => (x => x + 42)(10) 52
Given that reduce sensibly returns the final value (i.e. the result of the final expression in the iteration loop), which is 52, we're good.
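As a footnote: pipe has a natural sibling, compose (also available in Ramda), which applies its functions right-to-left instead. A sketch of my own - simply swap reduce for reduceRight:

```javascript
// compose runs its functions right-to-left, so the rightmost runs first
const compose = (...fns) => x => fns.reduceRight((y, f) => f(y), x);

const dbl = x => x * 2;
console.log(compose(dbl, x => x + 42)(5)); //=> 94, i.e. dbl(5 + 42)
```

Same machinery, opposite direction - which of the two reads more naturally is largely a matter of taste.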
I do find it's useful to take one's time staring at stuff until the mist clears. I hope this post helps you when staring at things like this. Happy functional adventuring!
]]>The availability of Linux on my OS of choice (Chrome OS), in the form of Crostini, and the immediacy of the Linux terminal, where I feel most at home, has given me the chance I've been looking for to properly learn stuff I've only scraped by with in the past, and to mix old & new techniques. I'm blogging using GitHub Pages which means Markdown and Jekyll, synchronising content between my local machine(s) and the cloud using git repositories. I'm using Vim to write, and am especially enjoying some of the plugins I'm trying out, in particular this zen-like writing mode that the lovely combination of Goyo and Limelight offers.
I've started rebuilding my Vim setup from scratch, based on the work of some great folks out there, including the author of many Vim plugins Tim Pope and someone with a great setup and approach, Luke Smith. I've started to share my Vim setup in my dotvim repository on Github.
I'll talk about the contents of that repository in another post sometime ... for now, I wanted to mention a script I wrote to help me quickly start writing posts. It's called newpost.js and lives in a scripts folder in my $PATH. I can invoke it like this:
> newpost.js Vim, Markdown and writing
and it will create a new file with the right name:
2018-12-24-vim-markdown-and-writing.markdown
in the right place (the _posts/ directory of the local version of my blog repository), containing basic frontmatter that looks like this:
---
layout: post
title: Vim, Markdown and writing
---
It will then open up the file in Vim so that I can start writing immediately.
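The filename construction itself can be sketched like this - a hypothetical reimplementation for illustration, so the names here (slugify, postFilename) are mine, and the real newpost.js may well differ:

```javascript
// Turn a post title into a Jekyll-style post filename, e.g.
// "Vim, Markdown and writing" -> "2018-12-24-vim-markdown-and-writing.markdown"
const slugify = title => title
  .toLowerCase()
  .replace(/[^a-z0-9]+/g, '-') // runs of non-alphanumerics become single hyphens
  .replace(/^-|-$/g, '');      // trim any leading / trailing hyphen

const postFilename = (title, date = new Date()) =>
  `${date.toISOString().slice(0, 10)}-${slugify(title)}.markdown`;

console.log(postFilename('Vim, Markdown and writing', new Date('2018-12-24')));
//=> 2018-12-24-vim-markdown-and-writing.markdown
```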
I can imagine further refinements to this script, but I realised I wouldn't be able to get to any refinements until I created a first version and started using it. So I did. I've shared the script in a new scripts repository.
It's working well for me so far, but I want to explore further the relationship between a Node.js based script and the underlying shell environment. I'm already spawning a Vim process to edit the file, directly from the Node.js process:
const cp = require('child_process')

// open the newly created post in Vim, jumping to the end of the buffer
cp.spawn('vim', [
'-c',
'+normal G',
fullname
], {
stdio: 'inherit'
})
Perhaps next will be some interaction via environmental variables. We'll see!
]]>We needed a name for our shared co-located host gnu.pipetree.com and decided, literally, to think of two random words and put them together. And the name has stuck ever since. Unfortunately I've had all sorts of issues with Network Solutions, where the domain name is managed ... so much so that I've pretty much given up getting them to allow me to update my records. Piers had moved on to his own domain names a good while ago, so it has been just me using pipetree.com for a long while now.
I've been thinking a lot about 2019 and decided to clear the decks a little, to be somewhat more organised and to make a clean break. So I'm planning to retire pipetree.com over the next few months, and have moved to a new domain qmacro.org where you're reading this now. I've embraced the GitHub Pages approach to blogging, continuing on with using Markdown, something I started with Ghost (it was one of the main attractions of the platform).
My existing pipetree.com platform is served from a Linode-based virtual private server; I've been very happy indeed with Linode, but my own management of the server has lagged a little, and while I used to run various services on it, including the Ghost installation for my blog at pipetree.com/qmacro/blog, there's not much need for it any more, and it's become a little bit disorganised.
So I have migrated all my old posts to this new place, using Jekyll (as the basis for GitHub Pages) and I've made my first steps with the Liquid templating language to build my new homepage. I know that I'm breaking all my old URLs, but I feel it's the right move to make at this stage. I'm now managing my domains with Google Domains which is super simple, and a far better experience than Network Solutions. And I have a domain name that more reflects me and my identity on the Web.
I am of course still blogging over on the SAP Community and will continue to do so, but my personal space here will continue to serve in the wider context, as it always has done.
So there you are. A new start and new domain name ready for 2019, even running over HTTPS, through the power of Let's Encrypt. In the meantime, happy holidays!
]]>Google is rethinking the idea of the URL
I saw a few tweets this week, including this one from Jon Udell which pointed me to a tweet and article on Wired that talked about Google rethinking the idea of the URL. On the one hand, the concept of URLs doesn't belong exclusively for Google to do with as it pleases ... on the other hand, it's not for me to say what they can and can't think about.
Anyway, I read the article -- "Google want to kill the URL", by Emily Waite -- on Wired. At least, I think it was on Wired, I wasn't sure because Chrome was deliberately obscuring the URL in the address bar.
I thought it was worth sharing the thoughts that occurred to me as I read through it, so here they are, in context. I'd encourage you to read the article too, so you can come to your own conclusions. My thoughts are just that - thoughts, opinions, based upon nothing much except what I've read, mind you. Note also that some sections that I quote are directly from the article's author, others are quotes from other people interviewed for the article.
Quotes and comments
"...as Chrome looks ahead to its next 10 years, the team is mulling its most controversial initiative yet: fundamentally rethinking URLs across the web." --- There's definitely a very good chance it will be controversial - look at the "origin chip" idea that surfaced a few years ago; from what I surmise, the "origin chip" idea is nothing compared to the size of the rethink I'm getting a feel for.
"...Uniform Resource Locators are the familiar web addresses you use every day. They are listed in the web's DNS address book and direct browsers to the right Internet Protocol addresses that identify and differentiate web servers." --- Nope, not quite. The hostname part of the URL is what that is, not the URL itself. I'm not sure whether I should be relieved or worried at this inaccuracy. Relieved because it suggests the article isn't entirely based on solid research, or worried because of the dangerous conflation of two distinct things: fully qualified hostnames ("wired.com" is the example given in the article) and URLs. Dangerous, because it only serves to add fuel to the "rethink" fire.
"In short, you navigate to WIRED.com to read WIRED so you don't have to manage complicated routing protocols and strings of numbers." --- Again with this conflation. Stop it, please.
"As web functionality has expanded, URLs have increasingly become unintelligible strings of gibberish" --- On what basis is this statement made? I've been around the Web from the beginning, and use it daily. This statement is nonsense.
"[URLs] combining components from third-parties" --- What? This makes no sense whatsoever. Is it just a misunderstanding, or a deliberate attempt to inject a vague notion of unease in the reader?
"And on mobile devices there isn't room to display much of a URL at all." --- This is being presented as a problem, even though earlier in the same paragraph the problem was that they were too hard to read anyway. How does that logic work?
"it's difficult for web users to keep track of who they're dealing with" --- This is not related to the length or complexity of URLs, it's mainly related to the establishment of the server origin.
"it's hard to know which part of [URLs] is supposed to be trusted" --- Most people I know don't find it hard, and isn't this what the various secure symbols are for?
"in general I don't think URLs are working as a good way to convey site identity." --- Of course, that's an opinion, so we must read that as "I don't think URLs are working as a good way ... for me" (the quotee, Adrienne Porter Felt). It's not my opinion - I think the opposite.
"So we want to move toward a place where web identity is understandable by everyone - they know who they're talking to when they're using a website and they can reason about whether they can trust them. But this will mean big changes in how and when Chrome displays URLs." --- If that how and when is different to the current state (i.e. "always", right now), then the situation will be worse, not better.
"the problem doesn't have an easy answer" --- Could it be because it's not actually a problem, and therefore an answer doesn't make any sense here?
"even the Chrome team itself is still divided on the best solution to propose" --- This tells us a lot about what's going on here, perhaps.
"I don't know what this will look like, because it's an active discussion in the team right now" --- That's good, and I hope that discussion is open and remains open.
"That's one of the challenges with a really old and open and sprawling platform." --- Ah! There we go. I was wondering when it would appear. FUD in all its glory.
"everyone is unsatisfied by URLs" --- Nope. Wrong.
"They kind of suck" --- Nope, they don't.
"Google paused the origin chip rollout" --- I wonder why? Could it be because there was strong feedback that it was a bad idea?
"the team faced a lot of pushback for its HTTPS web encryption initiative" --- Too right! I don't always agree with Dave Winer, but on this topic, he makes a lot of sense.
"But you make a change and people freak out." --- Err, yep - of course they do, if it's not a change that's fully thought through.
"community scrutiny of any proposal Google puts forth will be crucial" --- This is super important. I know this is just the beginning, and the engineers are talking about ideas, which is more than fine. I hope for everyone's voice to be heard, and that everyone concerned expresses their opinion.
On that point, I've tried to make a start here.
]]>I've been very honoured to be a guest on SAP CodeTalk a few times, and I thought it would be worth listing those sessions here.
March 2014
Paying IT Forward ... IT Does Compute
November 2014
April 2016
April 2018
SAP Cloud Platform Workflow Service
May 2018
Aug 2018
Oct 2018
Jan 2019
Mar 2019
]]>There are some amazing online tools that allow online multi-user realtime editing, commenting, suggestions, revision management, and so on, and these work very well indeed.
Unfortunately, Office 365, Sharepoint and Teams are not tools that fit into this category and are detrimental to productivity and collaboration.
Thanks.
]]>"I've just bought a Chromebook. How do I use Word on it?"
After my initial, and usual phone-biting reaction, I read a reply that went along these lines:
"Soon you'll be able to run Linux apps, so perhaps you could run Libre Office"
Apart from that reply instilling a similar "good grief" response, it got me thinking. I have for a while felt slightly disappointed in the explosion of 3-in-1 Chromebook devices, with touchscreens, 180 degree hinge flip capabilities, and the ability to run Android apps. Now there's the prospect of running native Linux apps too.
Now I don't want to appear as a stick in the mud, but this is a little sad. I love the simplicity of ChromeOS as it is, a great browser on a fast device with little else: a secure shell facility for connecting to remote systems, and a simple file manager for when you absolutely need to give the cloud a leg-up by downloading and uploading files from one service to another. I grew up with the keyboard + screen combination; the keyboard shortcuts in ChromeOS are great, which means I use the trackpad on my Pixelbook less than I otherwise would. And I found a software switch that allows me to turn the touchscreen off. I hardly ever want or need a touchscreen. I just want my web terminal.
I don't want another Linux style OS - if I do, I'd use Linux. And those that know me know that I certainly don't want any Windows type OS - in fact, I don't allow any form of Windows operating system in the house, my son runs macOS and my two nephews' machines have Ubuntu Linux running on them.
I know I'm probably in the minority, but with the prospect of Linux apps coming to ChromeOS, I hope the project team manages to keep that sense of simplicity - and speed - that made ChromeOS so appealing in the first place.
]]>In the years since becoming an SAP Mentor lots of things have happened, and I'm grateful for the opportunities that have presented themselves to me ... as well as the Mentor shirts that take up a decent space in my wardrobe and that I proudly wear at events inside and outside the SAP developer ecosphere.
There's an end to pretty much everything though, and I've been planning to retire to SAP Mentor Alumnus status for a while. There are many reasons for this, but the main one is to make way for new blood. I can and will continue to be active in the community - becoming an SAP Mentor Alumnus doesn't mean that changes.
My move to SAP Mentor Alumnus status also coincides with a couple of other events; of course, there's my recent move to join the SAP family, but there's also the end of the term of membership of the SAP Mentors Advisory Board (MAB). So I think the timing is right.
Thank you to my fellow SAP Mentors and the programme team, and also to my esteemed co-members of the MAB. I don't see this as a big change - I'm just moving from one group of folks I respect to another.
]]>I have been involved in building and helping the SAP community (small 'c') grow for a long time, from mailing lists in the 1990's, through co-creating the original SAP Developer Network and seeing the changes through the SAP Community Network to become simply the SAP Community (see The SAP developer community 10 years ago, a post from 2005).
It's been great to see Developer Relations and the SAP Community moving under the wing of the office of the CTO, and with the backing of CEO Bill McDermott, there's certainly more than enough torque and momentum upon which to build.
The chassis has undergone some significant welding in recent years, but the current remodelling, while still needing some love and attention, is so much better for content creators. And without content, there is no community. I'm hopeful that the chassis and bodywork will go from strength to strength, especially with the recently announced 2018 redesigns, and we see the rebirth of interconnectivity.
What struck me most about listening to Björn and Thomas on the video just now was my perception of the sense of community being the strong, implicit anchor for the message and the vision. So I decided to transcribe the video, to perform a simple text analysis.
The transcription is here, and we can see from some simple textalyser analysis that the perception wasn't too far off - the top five places for word frequencies in the entire piece are (with occurrences in brackets):
1: "our" (12)
2: "community" (10)
3: "sap" (6)
4: "how", "new", "customers" (5)
5: "need", "think", "help", "content" (4)
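Counts like these are straightforward to reproduce; here's a minimal word frequency sketch of my own (not the textalyser tool), using reduce again:

```javascript
// Count word frequencies in a text, most frequent first
const frequencies = text => Object.entries(
  text.toLowerCase()
    .match(/[a-z']+/g)                                // extract the words
    .reduce((acc, w) => ({ ...acc, [w]: (acc[w] || 0) + 1 }), {})
).sort(([, a], [, b]) => b - a);

console.log(frequencies('our community, our SAP community'));
//=> [ [ 'our', 2 ], [ 'community', 2 ], [ 'sap', 1 ] ]
```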
Now it's a short dialogue so perhaps this analysis needs to be taken with a pinch of salt, but it certainly occurs to me that the core message, and the core task, is getting people and knowledge connected.
We can do that, can't we?
Update: I've since been encouraged to re-post this on the SAP Community site itself. So I've done so, here: Thoughts on what's next for the SAP Community.
]]>This privacy policy sets out how the scripts use and protects any information that you supply when using them.
We are committed to ensuring that your privacy is protected. We ask for no information, nor is private information required to operate the scripts. This policy is effective from 01 Feb 2018.
What we collect
Nothing.
What we do with the information we gather
We do not gather or keep any information.
Security
We are committed to ensuring that your information is secure. To this end, the scripts run on the secure Google platform, specifically via the Apps Script services, and data is transferred via secure HTTP.
]]>It turned out that the project was Esso's first implementation of SAP, in the form of R/2 release 4.1D. A short time later I joined the project and entered the (early) universe of SAP technology. This was before ABAP came along, which was actually fortunate, as I got to learn and use IBM's 370 assembler, in which R/2 was written. The design of the architecture, and the codebase underlying the application modules in R/2 was both fascinating and beautiful. (Incidentally it was from within the R/2 assembler framework that I took my online nickname "qmacro"). Combining that with the wonders of all things IBM mainframe (the MVS operating system with tools such as TSO, ISPF and JES2, IMS DB/DC, and of course JCL) and I was hooked.
Fast forward just over three decades. In that time, I've stayed with SAP technologies in roles as diverse as Basis Technician, Application Developer, Integration Expert, Trainer, Backend Programmer, Frontend Programmer, Consultant, Mentor, Troublecauser and Tea Maker, at end customers large and small, partners large and small, sometimes as a contractor and sometimes as a permanent employee. I've also spent a couple of periods at SAP as an external colleague, first in the 1990's on the IS-Oil project, and more recently on the UI5 team.
My interest and passion for SAP technologies has only grown since those early days where my workstation was an IBM 3278 terminal.
SAP technology has changed, some technical directions have come and gone, others have stayed and gotten stronger. One lovely development which continues to grow is SAP's embrace of open source and open protocols, something that's close to my heart.
But what's remained most prominent across the years is the human aspect - the people at SAP are what have made a difference. I remember back in the late 1980's talking directly to the developers of the R/2 batch programs (most memorably in the RA - asset management - module) and exchanging ideas. I also remember site visits from heroes of mine such as Dr Alfred Klar who was at the heart of everything that SAP relied on in IMS terms.
Today is actually no different, which is wonderful. Just yesterday I was corresponding with core developers from the SAP Cloud Platform Workflow service team on nuances of the implications of an HTTP 302 response on POST requests. And last week I was enjoying a couple of craft beers with some of my all-time heroes from the UI5 team in Mannheim.
So with all that in mind, the next and best logical step for me in my career is to become part of the family.
I'm really happy and very proud to say that I'll be joining SAP's Developer Relations team next month, reporting to Thomas Grassl and working alongside some fabulous people. What's more, the recent announcement of the move of Thomas's team, along with the SAP Community, which I helped give birth to all those years ago, to the CTO organisation under Björn Goerke, makes things extra special.
Finally, it seems fitting to title this post "Coming Home", as I think that's a good description of what I'm doing.
]]>I've written a series of blog posts in my (old) space on the SAP Community; here's a quick list of them for reference. While they're hopefully digestible individually, they sort of follow a logical sequence, so if you have the time and inclination, you might want to read them in order.
Part 1: The Monitor - notes on the workflow monitor app that is part of the SAP Cloud Platform Workflow service.
Part 2: Instance Initiation - an exploration of the part of the SCP Workflow API that deals with workflow instances, looking at how we initiate a new workflow instance, and paying particular attention to how we request, and then use, a cross site request forgery (XSRF) token.
Part 3: Using Postman - an explanation of how I use Postman to explore the Workflow API, making the most of some of Postman's great features.
Part 4: Service Proxy - the presentation of a small proxy service I wrote to handle the minutiae of initiating a new workflow instance.
Part 5: Workflow Definition - a look at the simple (beer recommendation) scenario I came up with to trial a workflow definition, and that workflow definition itself.
Part 6: User Tasks - an examination of user tasks within the wider context of workflow definitions, along with task UIs and how they fit into the context of the My Inbox app.
Part 7: Component Startup - an investigation into how a task UI starts up, where it gets the right data to display, and how it interacts with the My Inbox "host" app.
Part 8: Recommendation UI - a look at the specific task UI I wrote for the beer recommendation workflow.
Part 9: Script Tasks - a look at what they are, and how you can use them to manipulate the context of a workflow from within a running instance.
Part 10: Service Tasks - a brief excursion into calling other services from within a workflow, using the beer recommendation workflow scenario as an example.
]]>This week saw the announcement of a brand new pricing approach and website for the SAP Cloud Platform (SCP), which, judging by the reaction, was a very welcome piece of news. To many, the key change is the introduction of a consumption-based pricing model, as an alternative to the existing subscription-based model. Not only that, but the website offers a pricing estimation calculator which, if I've done things right, shows that consumption-based costs for the small project I described in my earlier post are not unreasonable.
It's still early days but the situation looks much better. As well as the cost, which I'll get to shortly, there are improvements in the two areas that were causing concern last time I looked - clarity and flexibility.
The very fact that the consumption-based model allows you to pick what services you want, without having to perform mental gymnastics while looking at a complex PDF document (which still exists for the subscription-based model), is a big plus. There's also a guided section which shows which services are additionally required, if any, and lets you add those to the estimate. For example, the use of the Workflow service requires the Portal and Web IDE services. The fact that you can turn the dial up and down on units (number of users, site visits, etc) and see the estimate change accordingly, is great.
Moreover, as you can see from the estimate I quickly put together just now, to reflect Workflow and Business Rules services, the monthly cost is not scary. Of course, it could always be lower, but in the context of a real project and subsequent productive use, the fee is minor.
I've just scratched the surface here, and will be digging into this a little more over the coming weeks. There are a few things I'd like to see improved already. For example, the estimate calculator itself is a little slow to load (I've just gone back to it and only have a busy spinner right now). And I saw somewhere that Business Rules is included when you choose the Workflow service, but I don't see this confirmed in the estimate summary.
Overall, I think this is a very good step in the right direction and gives SCP more than a fighting chance to compete. It's hard to consider a service if the usage costs are opaque and / or prohibitive. This new pricing announcement heralds changes that can only be seen as positive.
]]>If there's one thing that's constant in IT and consulting, it's change. Yes, that's a cliché, but it's definitely true here. And unless you're happy eventually becoming obsolete, you need to keep learning. It doesn't come easy - days are busy, coffee breaks are a welcome relief where you can let your brain coast and process thoughts in the background, and in the evenings you're tired.
So plan in time for yourself to read. Have a focus on a certain topic or area, and make it your plan to work through all the material you can find. Set aside some "time for me" - it's not indulgent, it's a key aspect of being a good consultant. I'm an early riser, so after a run, I give myself 30 mins each morning to catch up on articles I've bookmarked.
Learning isn't just about reading, of course; there's putting knowledge into practice too, and you have to make time for that as well. That might be at weekends if you find yourself with a bit of time on your hands, but it might just as well be while commuting, or an hour before the "main" day starts, say 0800-0900. Don't feel guilty - go for it. Nobody else is going to carve out the time for you.
(You may be interested to read this post I wrote about reading and learning: Tech Skills Chat with JonERP – A Follow-on Story).
There are so many articles on this subject I don't need to go into detail here. But I wanted to share a few things I do in this regard that work well for me.
First, though - let's talk about attitude. Resist the temptation to be influenced by people who expect you to respond to an email minutes after they've sent it. Email is not work. Work is work. Email is communication - and asynchronous communication at that. Don't let them dictate your activities, and don't treat your email like a to-do list, because the to-do items on there won't be yours!
So, here are the things I do that I recommend you do too. First, turn off all email notifications and alerts on all your devices. They only serve to distract you from what you're trying to do.
Second, discipline yourself to process email a few times a day. Don't have the email client running at any other time. This may be hard to do at first, especially if you're relying on your email client to remind you of meetings and events. But if you turn off notifications and minimise the client, that may be good enough. A side effect of applying this discipline is that it will eventually teach your colleagues that discipline too, at least in terms of expectations. And you can always send them (a link to) a polite note like this one: Email Discipline.
Finally, build a rule to handle your incoming email, splitting it on whether you've been directly addressed (in the "To:" list) or not (in the "Cc:" list). Divert to a "CC-Inbox" folder those emails where you've only been CC'd, and only check this folder once every two days or so.
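Most email clients let you express this rule declaratively, but the underlying logic is simple enough to sketch in a few lines of JavaScript. This is purely illustrative - the function name and the placeholder address are mine, not from any particular client's rule engine:

```javascript
// Hypothetical sketch of the To/Cc routing rule described above.
// The decision is simply: am I in the "To:" list or not?
function routeMessage(message, myAddress) {
  const me = myAddress.toLowerCase();
  const directlyAddressed = message.to.some((a) => a.toLowerCase() === me);
  // Directly addressed mail stays in the inbox; CC-only mail is diverted
  // to a folder you check once every two days or so.
  return directlyAddressed ? "Inbox" : "CC-Inbox";
}
```

The case-insensitive comparison matters in practice, as addresses often arrive with mixed capitalisation.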
If folks ask you to do something and they've not directly addressed you, that's bordering on rude. Resist the temptation to do it, and if they chase you on something, you can send them (a link to) a polite explanation like this one: Addressing Emails.
Managing email is not only about managing yourself, but about managing others.
There are as many articles on meetings as there are on emails, so I can be brief here too. Time is the most precious commodity. Some meetings are necessary, but they're the minority, especially if you're technical and have work to do. Here's what I do. Not everything all of the time, but when it feels appropriate.
If a meeting is longer than half an hour, ask if it can be shorter. Resist accepting meeting requests that are over an hour, or reply tentatively saying you can make the first hour. Meetings that have the "luxury" of more than 60 minutes tend to squander those minutes, and are almost naturally less efficient.
Don't accept meeting requests that lack information (such as dial in details, or an agenda). You can send the requester (a link to) a polite note like this one: Meeting Request Details.
If I'm working at a client, I allow myself a maximum of two 30-minute calls. I can absorb this time into an earlier start, a later finish, or some of my lunch, depending on how generous or hungry I'm feeling. If I'm working at a client and they're being billed for it, it's not appropriate to use that time for other work. You shouldn't do it, and your colleagues shouldn't expect you to either.
Finally, don't waste your time if a meeting doesn't start on time. I usually wait for up to 5 minutes into the call (sometimes 10 minutes if I'm feeling generous and can work on stuff while I wait) and if it hasn't started, I'll leave the call.
I try to step away from the keyboard at lunch. It doesn't always work, and sometimes that's because I'm putting some of my learning into practice. But your brain needs time to process what it's been working on during the morning, and it's not going to be able to do that if you're still in front of the screen.
Stepping away is a discipline I learned from practising the Pomodoro technique (see the post The Maker's Schedule, Restraint and Flow), and I do find it helps me regain focus, even if I'm not thinking about it explicitly.
Also, lunch time isn't fixed. My day starts early, which means lunch for me is around 1130. That's great, because when I get back from my lunch, others are just going, which means a little more peace than the rest of the day :-)
If I'm honest, it's taken me a while to come to this conclusion. I've been fortunate recently to move into a position where I can look after a team of folks who are all amazing, technically and otherwise. And through my career hacking on SAP technology, it's been the people that matter.
I try to think what I can do to help them. Not all the time, I'm not that saintly. But when I can, when I'm mindful of what's important, I try to make time. I've been in this business for three decades now, and it stands to reason that I probably have some wisdom to share, even if it's "don't do that, I did, and it's not good".
There's this idea of being a 10x developer. I'm not sure if this is just mythical or metaphorical, but there's a simple truth in there which is that a good way for one person to scale is by making other folks better. And that's what I'm trying to learn to do now.
]]>Roadmap available
I started to dig in to the Business Rules services this month, and liked what I saw. But there were things that were missing, in my opinion, so I've been eagerly awaiting an update to the roadmap. And this week we got one - Business Rules has its own roadmap:
SAP Product Roadmap - SAP Cloud Platform Business Rules (dated 22 Aug 2017)
Observations
It's been updated to reflect the next few quarters. I read through it this morning, and have some observations on what I read, which I wanted to share:
SAP intends to establish a common Enterprise Rule Model, an abstraction for design time and runtime across the different platforms today (their SaaS offerings as well as classic ABAP stack based systems and S/4HANA). And the focus for this model is clearly on SCP.
The Business Rules service is the next step in that not only are rule sets extracted, but also the execution, in the form of the runtime(s), and access to that execution, is available as a set of API-based services in the cloud. This does indeed lead to agility, business empowerment, legacy preservation and cost savings, as well as readability and reasonability.
SAP Leonardo needs a posse. Machine Learning and Internet of Things is all very well, but without a set of core services to do something with the intelligence and the data, we're not going to get very far. The Business Rules service seems an ideal candidate for mixing into the strategy here.
Deprecation is a fact of software life, and we see it here with the HANA based decision tables and rules framework (HRF), in favour of the (to-be) all-encompassing Business Rules service. It's a bold move, but the right one if we're to reach any sort of standardised business rule authoring, storage and processing across the wider SAP ecosystem.
Still missing right now is some sort of transport mechanism. Right now, even though the product is GA, there's no way I can see to manage the design time artifacts and transport them through DEV/TST/PRD tiers. There's the ability to manage rules from an active/inactive perspective in the repository, but that's still only within one subaccount. I even looked at the network calls behind the scenes to see what would be needed to build a DIY rule set extractor. But it was pretty complex and I wanted to go out for a run, so I shelved that idea :-)
So in Q3/2017 there are plans for a "REST API for SAP Enterprise Rule Model". I am interpreting that as what I'm looking for: to be able to manage the lifecycle and transport of artifacts across the landscape. Here's hoping!
Final thoughts
When I first came across the Business Rules service, I did wonder in some respects what purpose an extracted form of logic processing would serve. But on reflection, it's clear. Along with managing workflow (lowercase "w"), managing decisions which belong in the business is a key cornerstone of any successful organisation. It's early days for the service, and along with the missing transport mechanisms the UI is still a work in progress, I think, but it's definitely good enough for now, and I'm keeping an eye on things for sure.
]]>Recently I've been pleasantly encouraged by new services such as Workflow and Business Rules becoming available on the (free) trial cloud platform, which is great. I get to learn about the features and try things out. Encouraged by what I find, even in these services' early days, I decide to inform myself of how much an organisation would have to pay for these services.
So I follow the links and end up in the SAP Cloud Platform Pricing & Packaging area, which is a little bit too high level for what I'm looking for. Fortunately, I think, there's a link to a (PDF) pricing document (the URL for this particular resource suggests a date of May 2017).
(Update 11 Aug: SAP has removed that May 2017 PDF and replaced it with a new one with a resource name containing something that looks almost but not quite like a date, so I can't tell what it is: SAP-Cloud-Platform-Pricing-82017)
(Update 31 Aug: Seems this new pricing PDF has disappeared, and also the link from the overview pricing page has been removed)
(Update 01 Sep: Another new pricing PDF appeared today! SAP-CP-Pricing-September-2017)
Let's say I'm interested in putting together a small project using the Workflow service and also the Business Rules service, for a small department - about 25 users. I'm going to leave out any thoughts of storage and DB processing for now, and focus on the business services, but will want to deploy some integration scenarios to interface with my existing systems.
Here are my observations when trying to figure out what it will cost.
The main pricing page is not much use, because it doesn't tell me whether the Workflow service and the Business Rules service are included in any of the packages. I do note, however, that the "for medium business" package versions are perhaps what I'm looking for, as they're "perfect for midsized businesses or departments". While looking at the costs for these medium business packages, I see that the "multiple application edition" is "now" €59 / user / month, not much more than the "single application edition" at €39 / user / month. Has there been a price drop? Let's see.
I start reading the pricing document PDF and see that instead of €59 / user / month, the "multiple application edition" is twice the price, at €118 / user / month. I guess (that's all I can do) that the "now" on the website does denote a price drop. Good. A 100% jump to go from one application to two is a little steep.
So what about the specifics? The high level info is OK, but I need detail. I jump to the "Table 1" which gives me some more info. The "Enterprise Package" (sic) pricing looks much more than I'd like to consider shelling out for my department (starting at €1,500/month, through to €15,000/month), so I stay with the "Medium Business Packages" and see that there's a minimum number of users (10). That's OK, I have 25.
In examining the first part of Table 1, I notice that it's only the "multiple application edition" that will allow me to partake of the Integration services. So I'm already forced to go down that route (even for a single application). Hmm. Even if I jump back into the Enterprise Package options, I can't take the €1,500/month route, I have to start with the €4,000/month "professional edition" to get Integration.
As an aside, I had been very interested in the API Management service, but as it stands with pricing right now, the only option I have to use that service is to go for broke and shell out €15,000/month for the Enterprise Package "premium edition". No thanks.
Time to turn my attention to the Workflow and Business Rules services, in which I'm mostly interested. I scroll down Table 1 and don't find them anywhere. All I see is "Add a-la-carte-Resources", so I jump further down the document to "A-la-carte Services" (sic) and find the Workflow service, thus: "Monthly Tiered Fee, Min. 100 users, €1.75-€3/user/month". Ouch. I have to pay monthly for 100 users, even though I have only 25? And what's the difference between €1.75 and €3? It doesn't say.
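The minimum-fee arithmetic is worth spelling out. As a sketch (the pricing PDF gives a rate range without saying which rate applies at which tier, so taking the low end is an assumption of mine):

```javascript
// Workflow service minimum-fee sketch: billed for at least 100 users
// even if you have fewer. The €1.75 rate is the low end of the stated
// €1.75-€3/user/month range; tier boundaries aren't published.
function monthlyWorkflowCost(users, ratePerUser = 1.75, minBilledUsers = 100) {
  return Math.max(users, minBilledUsers) * ratePerUser;
}

console.log(monthlyWorkflowCost(25)); // 175 — a 25-user department still pays for 100 users
```

So even at the cheapest rate, my 25-user department pays for four times the users it has.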
Let's look for the Business Rules service. It's pretty new on the scene, and ... yes, as I thought - pricing is lagging behind. No mention of this service at all in either the pricing overview page or in this detailed pricing document. Oh dear.
To put the final nail in the coffin for this coffee time endeavour, I scroll back up, trying (in vain) to see if I've missed anything on the Business Rules service. No, I haven't (though it's not GA yet). But what I do notice is that the a-la-carte options ... aren't even a possibility for customers going for per-user pricing: neither of the Medium Business Packages offers the possibility of adding a-la-carte services. So I have to abandon my plans to go down the Medium Business Package route and consider the Enterprise Package route instead. That's crazy.
The SAP Cloud Platform is growing, going from strength to strength. And we're growing with it, businesses and consultants alike. But SAP isn't doing themselves any favours, by not making it easy to commit to championing these platform services and including them in demos and proof of concept solutions. Without clear, transparent, simple and reasonable pricing, it seems as though SAP are leaning against the door that we're trying to lever open.
]]>Step 1: Make sure your bookmark bar is visible
See the help article How to use the bookmarks bar for more details on this.
Step 2: Drag this link to the bookmark bar
It's called a "bookmarklet" and is a bit of JavaScript code.
Step 3: Click the bookmarklet when you're in the Timesheet
Whenever you start up the timesheet system, click the bookmarklet once you're in ... and breathe a sigh of relief.
]]>This can easily be described in the meeting request itself, allowing the invited participants to prepare properly and share a common purpose with the others.
A meeting request with nothing in the description is not a meeting request, it's just a request for time, which is the most precious commodity. We can fix this by ensuring there's something in the body covering the topic and those outcomes.
Let's do that.
]]>Roll on a few weeks and I start to hear about Actions on Google, API.AI and the Google Assistant infrastructure. Cut to today, and I'm so enamoured with how the platform is panning out that I've bought a Google Home device and am already trying out my own test actions[^n].
I'm a big fan of the Google Apps platform, in particular Apps Script. The combination of server-side JavaScript with the rich access to the Apps platform and data makes it very easy to build and deliver very useful services. I put together the SheetAsJSON service back in 2013 and I, along with others, still use it today.
So it wasn't unusual for me to think of Apps and Apps Script as a natural set of tools in building out some Actions on Google functionality. I had watched the recording of the excellent session Extending the Google Assistant with Actions on Google (Google Cloud Next '17) with Guillaume Laforge and Brad Abrams and thought that their example action - a conference helper to assist with discovering topics and sessions - was not only useful, but also ideal for taking my learning to the next level. I studied the content carefully and came up with my own version. Theirs was using an API endpoint that looked like this: http://cloudnext.withgoogle.com/api/v1/...
, and is represented by the "Next" box in this slide (from the session):
If I was to build my own version, I'd have to come up with a service of my own. This is where the Apps platform and Apps Script came in. First, conference session data lends itself to being marshalled into rows and columns, and at least for me, seeing data in front of me in a structured form really helps. So grabbing the data for a conference and putting it into a spreadsheet was the logical first step. But it only got better from there.
Spreadsheets are about storing and managing data, but that data and management is dynamic. Having calculated values is a completely natural thing, and the spreadsheet model of values dependent on other values is powerful, especially when you want to manipulate the data, say, for testing and discovery purposes. Moreover, for developing the natural language that you want for your action's persona, it's a good way of setting up data circumstances that warrant a particular figure of speech or turn of phrase in the response you're building.
So I stored the conference data for my version of the helper in a Google spreadsheet, enhanced it with some calculated values, and then wrote some simple Apps Script to provide an API to that data set. So that combination became the equivalent to the "Next" box in the slide shown earlier.
Here's a demo of my helper in action, including a look at the spreadsheet and how the data is surfaced in speech:
If you're curious to see what the Apps Script based API produces, here's an example from the call to retrieve the topics (ie the one called to be able to fulfil the 'list-topics' intent):
Note that the topics (Data Science, Security, Chrome OS and so on) are returned in a map, where the properties are the topics and the values are the lists of sessions for each of those topics. The data thus retrieved is stored in the relevant context, so that once the user has heard about the topics available and wants to explore the related sessions, the data is available immediately without a further call needed to the service API.
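Marshalling flat spreadsheet rows into that topic-to-sessions map is a simple reduction. Here's a hypothetical sketch - the field names (`topic`, `title`) and sample data are illustrative, not taken from the actual spreadsheet or API:

```javascript
// Group flat session rows (as they might come out of the spreadsheet)
// into a map whose properties are topics and whose values are the
// lists of sessions for each topic.
function sessionsByTopic(rows) {
  return rows.reduce((map, row) => {
    (map[row.topic] = map[row.topic] || []).push(row.title);
    return map;
  }, {});
}

const topics = sessionsByTopic([
  { topic: "Data Science", title: "ML APIs in practice" },
  { topic: "Security", title: "Protecting your data" },
  { topic: "Data Science", title: "From spreadsheets to insight" },
]);
console.log(Object.keys(topics)); // the available topics
```

Storing the whole map in the context, as described above, means the follow-up "tell me about the sessions" intent can be fulfilled without another round trip.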
Anyway, I'll leave it there for now. The writing of this post was spurred on by Eric Koleda who asked me to share a demo - thanks for the prompting, Eric!
If there's interest, there's a lot to talk about in future posts. Some topics that come to mind are:
[^n]: One particularly irksome issue right now is that to get actions to work, I have to switch the Home device to US English, which just doesn't feel right ... and the locale-related changes that come with that switch mean that temperatures that are consequently given in Fahrenheit don't mean anything to me ;-)
]]>I discovered Ghost a while back, and am using it to host another blog of mine - Language Ramblings. Its simplicity reminds me a lot of Blosxom, plus there's my composition language of choice built in - Markdown. So before embarking on some new blogging, I thought I'd change over from WordPress to Ghost, which, as you can see, I've done.
I've exported the content from my old platform and imported it here. I'm currently running the instance on a different port, locally on my server, and reverse-proxying the /
path from my Apache install to it. There will be a few rough edges over the next few weeks, where I need to sort out relative paths for a lot of the URLs, so please bear with me. But I feel the effort will be worthwhile. The lack of friction I feel with the Ghost platform is definitely something I'm embracing.
Best practices in any technical endeavour apply to each and every stage and building a mobile reporting solution is no exception. Here, in Bluefin's Mobility, User Experience and Development Centre of Excellence (MUD CoE for short!) we find it useful to align our thinking with the flow mantra that SAP have popularised: Discover, Design, Develop, Deliver. (There's also a fifth "D", but you can read about that in another post: Debugging SAP Fiori apps - the fifth "D".)
Here are some best practices for you to consider when contemplating a mobile reporting solution, organised by the stages in that flow. Whether it's adopting a pre-built solution or rolling your own, these principles will keep you on the right track.
If you're going to make big decisions, make them up front, in the Discover phase. And if you're going to change your mind, it's least costly to do it at this stage. You haven't committed yet and therefore have the ultimate luxuries in decision making - the most time and the least pressure.
Back in the late 1980's I was on an SAP system migration project (from R/2 to R/3) and a business analyst friend related to me some of the results of the "reporting requirements analysis" he'd carried out with the users. For one of the reports that had been designated "must have", he had interviewed the report recipient to find out more about what they did with it. The response: "I receive the report". When questioned what they did after that, he got: "I put it in the bin".
The Discover phase is where you should start asking the hard questions. In the case of building a mobile reporting solution, those hard questions should be designed to qualify the mobile approach in or out. There's no point in doing it further down the line. Once the personas have been established, ask them: What do they actually need to do? ("Look at the report while mobile" is not an adequate response). Why do they need to do it on a mobile device? What manipulations do they expect to be able to perform?
Some data visualisations work well on small form factors, some don't. What's possible, and what you should attempt, are often two very different things. Understand the reasons behind the requirements before moving on to Design.
Building solutions that work effectively offline is hard. Don't let anyone tell you differently. Yes, you can cache data on the client, but ensure you have calculated the cost-benefit ratio. How old can you allow the data to be? How much processing power does the client need to have to be able to perform aggregations locally? Does building data manipulation into the client restrict the choice of target device? Will you need to deliver and maintain separate OS-native versions of your reporting solution, or will you be able to embrace the Web-native philosophy and use the power of your backend systems, into which you've poured a ton of enterprise budget as well as a ton of enterprise data?
Finding the right balance between realtime and stale data, and juxtaposing that with the balance of client performance and flexibility is something you need to achieve. Make sure you do it, and do it up front.
Discovery morphs eventually into design, which is equally as important. Once you have the outline solution, you need to refine it so that not only will it be useful, but deliverable too.
Leading a horse to water and making it drink are two separate things. So are making a reporting solution available and getting the users to engage with the data. Once you've identified the solution approach at a high level, you need to design it to be as engaging as possible. It's time to mention User Experience (UX), and in particular, to consider what aspects are required to make good UX a reality in the case of a mobile reporting solution.
Time-to-Insight is a term I've just made up, but it expresses one of the key aspects - and that is friction. Or, rather, the lack of it. How many clicks through a User Interface (UI) do my users need to get to the data they want to see, to find the insight that's waiting for them to discover?
Remove friction by implementing Single Sign On (SSO) and allowing them to personalise their reporting preferences, either explicitly or implicitly (by observing and learning from behaviour). Consider UI aspects beyond the actual reporting presentation software itself. In the case of the Fiori Launchpad, for example, there's a blurring of distinction for many when it comes to analytics and dashboards - dynamic launch tiles can be enough for a user to satisfy some of their insight needs. Additionally, consider notifications at the device level, which serve to alert users to changes in data circumstances and nudge them towards the solution.
Your IT department's device hardware policy may very well be a constraint in designing a mobile reporting solution, but in fact it's not as restrictive as you might first think. Have an Apple iPad only policy, or a Galaxy Tablet only policy? That doesn't mean the design of the solution needs to be iOS or Android native. When it comes to mobile, there are three general platform options: OS-native, Web-native and Hybrid. The general pros and cons of each have been compared many times before and I won't re-hash that old chestnut here.
Let the requirements defined in the Discovery phase drive this part of Design. If you have a choice, ask yourself first why you wouldn't start with Web-native. The other two options only serve to restrict the target audience for the second most valuable asset your company has (yes, I'm saying that the most valuable assets are the people, in case you hadn't realised the extent of how much old age has mellowed me).
Develop
You've moved down from 50,000 feet in discovery, through 10,000 feet in design, and now you're at ground level, ready to bring the solution to life. Here are a couple of things you must consider if you want to make your mobile reporting solution sing.
The data that will power your reports has different aspects. Location: Where is the source of truth? Stability: How often does it change? Size: How much of it is there, and what are the aggregation requirements? Different parts of the data set you're using will look different across these aspects.
This means that you can - and must - consider the best way to manage that data. Can you preload and cache sets of values that don't change frequently? If so, how much can you afford to store on the mobile device, and how do you make the initial load painless? How much processing is required to present the data in a way that's meaningful to the user? Do you push aggregation and calculations to the server side, or rely on the device to process that locally? How does this link with users' understanding of "works offline"?
The answers to these questions should inform how the solution is developed, whether that's a solution based on standard tools, or a completely custom approach.
As likely as not, the question of "what 'mobile' means" will have surfaced in the Discovery or Design phase. If you've got to the Develop stage and it hasn't, that's a big warning sign meaning you may want to consider iterating back through those previous stages.
The mobile platform as a target for any application solution is naturally unstable. New devices are coming out all the time, with different capabilities and screen sizes. If you look carefully at the history of the SAP Fiori Design documentation, you'll notice that one of the 5 key principles was quietly changed, without so much as a browser alert message. "Responsive" became "Adaptive", and signalled a subtle shift in the philosophy that drives how apps should respond to being executed on different devices.
An adaptive approach should be a key consideration in how the user interface (UI) is developed. A great example of how this can be achieved is by using the facilities presented by SAP's UI5 toolkit. The support for mobile devices, device detection, and dynamic view declarations go a long way towards helping you create a solution that works not just on one device, but many, now and into the future.
It's almost time to let your users loose on your solution. Before you do, remember these two important points.
This is where we can delight in the concept of "meta", so wonderfully celebrated by that most mind-expanding of authors Douglas Hofstadter. Insight about insight is what this best practice is all about. Especially when you turn that meta-insight into action. When delivering the reporting solution, include a layer of usage analytics, so that you can learn about how your users are actually using the solution. What are they selecting? How are they navigating? What parts of the solution are they not using? What times of day are more popular than others?
We've used Google Analytics, embedded in a Fiori context via a plugin mechanism, to great effect. It doesn't have to be Google Analytics; it just needs to tell you what you need to know to take action to improve the solution over time.
If data is your second most valuable asset, you need to protect it. This means finding the right balance between low friction and security when it comes to data access - not getting in the way of the right people, and totally getting in the way of the wrong people. This is where the delivery of your solution must coincide with the security policies that your organisation already has.
If you have an OS-native solution, or a Hybrid based solution, you can bind the apps into the deployment and management mechanisms already in place for your mobile devices. If you have a Fiori-based solution, you could consider adopting the SAP Fiori Client, or a derivative thereof (with Kapsel), to participate at this level. If you have a purely Web-native solution, then you can go the Hybrid route or consider plugging in a timeout mechanism that will remove cached data and navigate away from where the user was. This mechanism can be bound into the overall reporting solution in a straightforward manner that is pretty much independent of the actual solution implementation.
There's a lot to consider in any solution, but particularly, the combination of UX, data that is to provide insights, mobility and devices that are somewhat out of your control is a heady mixture that can cause headaches. As long as you bear these best practices in mind, you know at least you're building in the right direction.
As Buzz Lightyear might say if he were building a mobile reporting solution: "To insight, and beyond!".
This post explores one particular pattern that is inherent in how recursion is often expressed in some functional languages, and finishes with the alternative based on what I'm going to call "list machinery" - mechanisms within a language that provide powerful processing abstractions over structures such as lists.
Erik Meijer, whom I'll mention properly in a moment, uses a phrase "if you stare long enough at it ...". This really appeals to me, because it expresses the act of focus and concentration in a wonderfully casual way. I've stared at this stuff long enough for it to become something tangible, something recognisable, and hopefully there's useful content here for you to stare at too.
It was my son Joseph that introduced me to the concept that has intrigued me since the first day I saw it. Proficient in many different languages, he was showing me some solutions to Project Euler challenges that he'd written in Haskell. They involved a fascinating approach using pattern matching: determining the resulting value of something based upon a list of possible matches on the data being processed. It involved expressions using the symbols x and xs. This is very abstract, but it will become more concrete shortly.
The next time I encountered this pattern matching technique was in a series of lectures by the inimitable Erik Meijer. These lectures are on functional programming techniques, and the series is called "Programming in Haskell", although the concepts themselves are explained in terms of other languages (C#, LINQ) too. I thoroughly recommend you spend some time enjoying them. One thing that Erik said a lot was "x over xs", which is expressed as x::xs.
Being somewhat intimidated by the M-word (monad), I have avoided Haskell so far, although my interest in functional programming in other languages (such as in Clojure, and with Ramda in JavaScript) has grown considerably ... I presented at SAP TechEd EMEA and also at UI5con, both in 2016, on functional programming techniques in JavaScript.
And now, learning elm, I re-encounter these pattern-matching patterns again. I think it's because, at least to my naive mind, elm seems to reflect a lot of concepts from Haskell (and from Clojure, for that matter). The patterns are expressed nicely in an online book "Learn You an Elm"; the book is very much a work-in-progress but definitely worth a read even at this early stage.
The examples in this post will be in elm.
It turns out that the wonderfully succinct expression x::xs represents one of the core concepts in functional programming. A list can be seen in two parts - the head, and the tail. The first element, and the rest of the elements. So x represents the head of a list, and xs represents the tail. And the ::? That represents the concept of "cons", which has its own page on Wikipedia but which I'm going to call "prepend" for brevity.
One thing to bear in mind from the outset is that in functional programming, there are no loops. Not as you or I might understand them, at least. But if you think of a list of items that you want to process "in a loop", the concept of "x over xs" is what you need.
If you want to transform a list, by applying some function to [each element in] that list, here's how it goes, using that concept. Remembering that functional programming is less about describing the 'how', and more about stating the 'what', we can say that the new list is the result of the function applied to the head of the list (an individual element), combined with the result of the function applied to the tail of the list (a smaller list).
And the function applied to the tail of the list is the result of the function applied to the head of that list, combined with the function applied to the tail of that same list.
And so it goes on, until there are no more elements in the (ever decreasing) tail to which the function must be applied.
The function is called recursively in this fashion.
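Before we get to the elm examples, the idea can be sketched in JavaScript, where array destructuring makes the head/tail split easy to see (mapOver and fn are purely illustrative names, not anything from the post):

```javascript
// A recursive "map" over a list: apply fn to the head (x),
// then prepend the result to the recursively mapped tail (xs).
const mapOver = (fn, list) => {
  if (list.length === 0) return []; // base case: the empty list
  const [x, ...xs] = list;          // "x over xs": head and tail
  return [fn(x), ...mapOver(fn, xs)];
};

console.log(mapOver(n => n * 2, [1, 2, 3, 4, 5])); // → [2, 4, 6, 8, 10]
```

Each recursive call works on a smaller tail, until the empty-list base case stops the recursion.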
Here's an example. If we have a list [1,2,3,4,5]
and want to compute the sum of all the elements in that list (15), this is what the pattern matching approach looks like.
sum : List number -> number
sum list =
    case list of
        [] -> 0
        (x::xs) -> x + sum xs
Calling the function on the list gives us what we're looking for:
sum [1,2,3,4,5]
--> 15
Let's extract the core pattern matching approach here, in the case expression:
[] matches an empty list. To be useful (ie not go on forever), recursion needs a base case. This is the base case.
(x::xs) is a pattern that matches two parts - the head and the tail of the list. If the earlier [] didn't match, then we're going to have at least one element, matched into x, and any further elements, if they exist, are matched into xs.
(For those of you who, like me, are not steeped in strongly typed languages, the question "what happens if we don't pass a list at all, just, say, a string?" doesn't even come up, as the elm compiler won't allow that to happen.)
The sum of an empty list of numbers - the base case - is zero, clearly. The sum of a non-empty list of numbers is where we see the recursive nature of the definition: it's the first number added to the sum of the rest of the numbers. And while contemplating this beautiful simplicity, consider also that this is an example of how a functional approach to programming is declarative, rather than imperative. Rather than explaining how to compute the sum (which we'd traditionally do with a loop and some variable to accumulate the final value), we're just saying what it is.
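To make that contrast concrete, here's a quick JavaScript sketch of both styles side by side (the function names are just illustrative):

```javascript
// Imperative: describe *how* - loop, and accumulate into a variable.
const sumImperative = list => {
  let total = 0;
  for (const n of list) total += n;
  return total;
};

// Declarative: describe *what* - the sum of an empty list is 0;
// otherwise it's the head plus the sum of the tail.
// (Assumes a list of numbers, so undefined signals the empty list.)
const sumRecursive = ([x, ...xs]) =>
  x === undefined ? 0 : x + sumRecursive(xs);

console.log(sumImperative([1, 2, 3, 4, 5])); // → 15
console.log(sumRecursive([1, 2, 3, 4, 5]));  // → 15
```

Both arrive at the same answer; only the recursive version reads like the elm definition above.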
Let's examine a few more instances of this pattern matching. I'm going for quite a few examples, so you can stare at them all for a while.
First, how about calculating factorials:
factorial : Int -> Int
factorial n =
    case n of
        0 -> 1
        _ -> n * factorial (n - 1)
factorial 5
--> 120
We have the same approach here: matching the base case, where n is zero, and then declaring that the factorial of n is just n multiplied by the factorial of n - 1. In this particular example of pattern matching, we're not interested in capturing the matched number (as we already have it in n), hence the _ in the pattern.
How about calculating the length of a list? That's simple:
length : List a -> Int
length list =
    case list of
        [] -> 0
        (_::xs) -> 1 + length xs
length [1,2,3,4,5]
--> 5
Again, we see the same pattern.
This time we'll use the pattern matching approach to produce the reverse of a list.
reverse : List a -> List a
reverse list =
    case list of
        [] -> []
        (x::xs) -> reverse xs ++ [x]
reverse [1,2,3,4,5]
--> [5,4,3,2,1]
This time, the base case - an empty list - results in an empty list. Otherwise we take the reverse of the tail and append the head to it, recursively.
Now let's define our own take function, a common facility found in functional languages that are (naturally) list-oriented. The function returns the first n elements of a list.
take : Int -> List a -> List a
take n list =
    if n <= 0 then []
    else case list of
        [] -> []
        (x::xs) -> x :: take (n - 1) xs
take 3 [1,2,3,4,5]
--> [1,2,3]
Here we have something extra. There are two base cases - where the list is empty, but also where the number of elements to take is zero or less. But otherwise the pattern is the same.
This time, we're going to need the if then else expression to declare the recursive definition for a function that returns whether an element is a member of a list.
member : a -> List a -> Bool
member a list =
    case list of
        [] -> False
        (x::xs) -> if a == x then True else member a xs
member 3 [1,2,3,4,5]
--> True
member 6 [1,2,3,4,5]
--> False
If the list is empty - the base case - then the answer is clearly going to be False. Otherwise we check to see if the head of the list is the same as the element to find, and if it is, then the answer is True; otherwise we recurse with the tail of the list.
Finally, here's a definition of a function that will return the maximum value in a list. Note here that the function's type signature uses the comparable type.
maximum : List comparable -> comparable
maximum list =
    case list of
        [] -> Debug.crash "Maximum of empty list?!"
        [x] -> x
        (x::xs) -> let m = maximum xs
                   in if x > m then x else m
maximum [1,2,3,4,5,4,3,2,1]
--> 5
Here we see another construct - the 'let expression', similar to how 'let' is used in Clojure for bindings. It allows the creation of short-lived values, similar to scope-limited variables in other languages. What we're saying for this last pattern case (x::xs) is that we want to calculate the maximum of the tail of the list and assign that to m, and then in the context of that, check whether the head of the list is greater than that or not.
Elm has a builtin function max that will return the maximum of two comparables. The "Learn You an Elm" book points out that this can allow us to be even more succinct in our maximum function, like this:
maximum_ : List comparable -> comparable
maximum_ list =
    case list of
        [] -> Debug.crash "Maximum of empty list?!"
        [x] -> x
        (x::xs) -> max x (maximum_ xs)
maximum_ [1,2,3,4,5,4,3,2,1]
--> 5
Wonderful.
I've talked at length about recursion, and perhaps you too can see the beauty therein.
So what about list machinery? Again, following the 'what not how' philosophy, let's look at a little bit of list machinery in the form of the swiss army chainsaw function reduce, which in elm (and other languages) is known as foldl - for "fold-left".
We'll re-implement the maximum function with foldl and the max builtin function:
maximum_ : List comparable -> comparable
maximum_ list =
    List.foldl max 0 list
maximum_ [1,2,3,4,5,4,3,2,1]
--> 5
The foldl function reduces a list by applying a function in turn to each of the elements. The function to apply should have two parameters - in elm's foldl, the first accepts the list element being folded over, and the second accepts the accumulated value. So here, the function to apply is max, which takes two arguments (and for max, the order doesn't matter), and the initial value for any maximum comparison (of positive numbers, at least) should be zero.
Perhaps more wonderful even than the recursive version? Remember that we're processing each element of a list, without any concern as to how that processing happens - we leave that to the language's list machinery to deal with for us.
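For comparison, here's roughly the same fold sketched with JavaScript's reduce - note that in JavaScript the reducing function takes the accumulator first and the element second, the opposite order to elm's foldl:

```javascript
// Fold the builtin Math.max over the list, starting from 0
// (fine for positive numbers, as with the elm version above).
const maximum = list => list.reduce((acc, x) => Math.max(acc, x), 0);

console.log(maximum([1, 2, 3, 4, 5, 4, 3, 2, 1])); // → 5
```

Again, there's no explicit loop - the list machinery takes care of visiting each element for us.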
And to finish, how about using foldl to reverse a list, so we can contrast it with the recursive definition earlier?
List.foldl (::) [] [1,2,3,4,5]
--> [5,4,3,2,1]
(The cons function :: can be used in function position by surrounding it in brackets.)
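The reason this reverses the list is that the fold prepends each successive element to the accumulated result, so the last element ends up at the front. A JavaScript sketch with reduce makes the intermediate steps visible:

```javascript
// Fold left, prepending each element to the accumulator:
// [] -> [1] -> [2,1] -> [3,2,1] -> [4,3,2,1] -> [5,4,3,2,1]
const reverse = list => list.reduce((acc, x) => [x, ...acc], []);

console.log(reverse([1, 2, 3, 4, 5])); // → [5, 4, 3, 2, 1]
```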
Gosh. Worthy of a long stare. Don't you think?
Postscript:
A day after publishing this post, I find myself at Lambda Lounge in Manchester this evening, hosted at MadLab, where we're learning about Elixir, a dynamic functional language on the Erlang OTP platform, and a short way into the session, this sample is presented to us:
defmodule Factorial do
  def of(0), do: 1
  def of(n), do: n * of(n-1)
end
Factorial.of(5)
//> 120
While the Elixir syntax may be less familiar to us right now, I'm guessing that the approach here, that we've examined in this post, jumps off the page in a rush of familiarity. Within the Factorial module, we have a couple of definitions of the of function, which are used in the same pattern matching way. And we can see the base case defined thus:
def of(0), do: 1
and the recursive call:
def of(n), do: n * of(n-1)
Lovely.
In April 2014, I was honoured to be elected to SAP's Developer Advisory Board. We met for a series of sessions over Thursday and Friday last week in Miami, USA. A long way to go for many of us but definitely worth the trip. It was evident from the diverse discussions that not only does SAP take developers seriously, it also recognises that there's an army of folks who may write the odd line of code or declarative configuration but whose main focus is on wiring up pieces of technology and making them work together.
Until a better word comes along, I'm going to wave my arms in the air and use "devops" to describe this sort of activity. Devops, or "developer operations", is traditionally a representation of three related practices: Development, Quality Assurance and Operations.
In the SAP ecosphere especially, if you look at the sets of activities required to wind up and keep the right combination of spinning tops humming in tune, there's also another practice that we need to recognise, and that is the care and attention of systems and services, integrated between cloud and on-premise. I'm thinking particularly of course of the services within, and connected to, the SAP Cloud Platform (nƩe HANA Cloud Platform).
In 2013 Stephen O'Grady's book The New Kingmakers was released. The book and the phrase "developers are the new kingmakers" resounded clear and true, waking many up to the reality that programming wasn't like laying bricks or pouring concrete; rather, it was the lifeblood of the virtual structures upon which businesses are built and run, a lifeblood that, if treated like a commodity or as a cost, would start to go off.
So who are the "new new kingmakers"? They're the same folks that they always were - the quiet, often unsung army of people building and maintaining software that both balances and differentiates organisations. But alongside, there are folks that build, but in a different way.
They're the ones that connect up - both physically and virtually - the complex machinery, much like a sound engineer creates the right combination of instruments, and the MIDI-based timing coordination between them. The developers write the music, whereas the engineers, the devops folks, make it possible and get the tracks recorded.
SAP is acutely aware there's a landscape in which conversations with the kingmakers need to take place. That landscape is as varied as the landscape on any planet; flora, fauna, mountains, seas, deserts and everything in between. SAP's software and service offering is growing year on year.
Even a single area such as the SAP Cloud Platform has its own diverse language and tech ecosphere, with areas as different as the Cloud Foundry meta-platform (with "BYOL" - bring your own language) and the cloud integration and API management facilities.
SAP's Developer Relations team has already made great strides over the past few years in recognising what the landscape looks like, who and where the kingmakers are, and what they need to remain kingmakers. The team is also very conscious of other organisations' initiatives (such as those from Google, Amazon and the like), how they reach out to the communities, and perhaps critically, how they find, attract and properly welcome net new developers and devops folks alike.
SAP has been cultivating and growing technologies for over four decades (and I've been happily embracing some - not all - of them for three of those four decades). If you'd said to folks fifteen years ago that they'd eventually adopt a REST-informed approach to integration, they'd have laughed at you. Similarly, open sourcing a major piece of technology (OpenUI5), and making JavaScript a first class language in the SAP ecosphere - all almost entirely unimaginable only ten years ago.
With these changes, SAP continues to mature. Moreover SAP is remembering that the people who make businesses work - the new new kingmakers - are more important than ever.
And that makes me very happy.
In the SAP Fiori world, there's a mantra which generally goes like this:
Discover -> Design -> Develop -> Deploy
These four "D" words represent the progressive stages of bringing new functionality into the world, starting with Design Thinking principles, iterating on prototypes early on, building a stable first version and moving it to production. You can read more about this mantra, relating to SAP's User Experience as a Service, in Digital devolution in local authorities - Putting people first.
However, there's a fifth "D" word that is crucial. That word is "Debugging", representing activities that kick in around "Develop" and live well beyond "Deploy".
Don't get put off by the knee-jerk negative connotations that the word "debugging" might conjure up. Sure, it represents the act of diagnosing and fixing problems with code, with apps, especially those already in production. But it also represents the more positive aspects of being able to thoroughly understand an app from the inside, behind the scenes.
Not only to be able to support it and to address issues, but also to have the wherewithal to properly extend and enhance it, introducing new features in a sane and safe way, sympathetic to the existing design, architecture and codebase.
SAP Fiori apps can be complex beasts. This complexity comes at different levels. While the Fiori design language demands simplicity of user experience, realising that minimal user interface is far from simple; it's not magic that causes a swan to glide effortlessly and smoothly across the water, it's a combination of muscle coordination, fluid dynamics and, well, pretty furious paddling under the surface.
Another level of complexity is in the implementation itself. UI5, the industrial strength HTML5 toolkit with which Fiori apps are built, is commonly referred to as being "enterprise grade". What that means is that it has a set of features and facilities that are key to creating business apps that can go global: Support for the Model-View-Controller development approach, data binding, internationalisation, control hierarchies and asynchronous network requests to satisfy business data consumption are just some of the ingredients that make Fiori apps what they are today. Moreover, these ingredients make for complex scenarios under the hood.
There's a difference between complex and complicated. By the nature of what is being achieved by building rich business apps that run in the browser and reach back into SAP systems to transact, lots of things have to be properly orchestrated. But that's different to complicated. Any fool can build something complicated - to paraphrase a famous quote that's often attributed to Mark Twain:
"I didn't have time to build a simple app, so I built a complicated one".
Building something complicated is not the same as building something that is inherently complex.
With traditional ABAP-based apps that run in SAPGUI, it became second nature long ago to enter "/h" in the OK-code field and jump into debug mode. The ABAP debugger tools have improved considerably over the decades and we've come to expect not only facilities to help us wade through and find what we're looking for, but also (and perhaps more importantly) for folks in development and support teams to understand what they're looking at and to be able to find their way around behind the curtain.
So it is also with HTML5-based SAP Fiori and UI5 apps today. What facilities do we have at our disposal to help us navigate the complexities of a modern business app?
Well, for a start we have the excellent UI5 Software Development Kit (SDK). Even from early versions it has been rich in documentation, samples and more. All debugging endeavours should start with a reference guide, the "sine qua non" that acts as a backstop, definitive documentation and final arbiter of how things work, or should be working.
Then we have built-in tools. Chrome is an incredible feat of engineering, providing near unrivalled development and debugging facilities for the creation and support of HTML5-based apps. It also has a pretty fine browser built in*! Time taken to teach oneself about features of the Chrome Developer Tools is time well spent.
Furthermore, there are some rather good facilities for inspecting the UI5 runtime itself, notably the UI5 Inspector, a browser extension that can be brought up alongside any running Fiori app.
What binds the reference material with what the tools can tell you is something else, though. A knowledge and general understanding of how UI5 and Fiori apps tick. How they are loaded, form themselves into functionality in the browser, and bring data and UI to life on the screen.
A painter doesn't pick up a brush and wield it like an axe against the canvas (well perhaps some do, but that's another story). They hone their skills and perfect their understanding of how their movements influence the brush, how the brush touches the canvas, and how the colours glide and combine.
I find that debugging a Fiori app is like having a conversation. A conversation with the control flow, a conversation with the mechanics of the runtime and often a(n imaginary) conversation with whoever was responsible for building the app. Understanding how to have that conversation is key.
So I was delighted when SAP Press gave me an opportunity to write a short guide to help us orientate ourselves with what we find under the surface of a Fiori app. That orientation is designed to show us how to use the tools available and have good debugging conversations with the runtime. It's in the form of an E-Bite, and is available now from their website:
SAP Fiori and SAPUI5: Debugging The User Interface
If you can see a Fiori app in your browser (I'm going to assume it's Chrome - if you're at all concerned with standards, security and reliability, why would you be running anything else?) then you're ready to start your debugging conversation.
Open up a separate window with the UI5 SDK, invoke the Chrome Developer Tools, and bring up the UI5 Inspector. And maybe even grab a copy of the E-Bite :-) Then you're ready to begin your debugging conversation. Start with "hello" and have a look around. You won't regret it.
* Yes, for those geeks amongst you, this is an oblique reference to the traditional description of the Emacs editor: "A fully featured operating system, lacking only a decent editor".
I couldn't get the wildignore setting working properly in vim, so I thought I'd document it here.
The wildignore setting is used in conjunction with the path and other settings such as wildmenu, and supposedly can be used to specify filename patterns that should be ignored when searching for files, for example using the find command.
I'm running a fairly modern version of vim (7.4) compiled with the +wildignore option:
dj@pipetree ~ $ vim --version | grep wildignore
+cscope +lispindent +python3 +wildignore
dj@pipetree ~ $
but specifying a pattern such as this:
set wildignore=node_modules/**
or variations thereof, such as
set wildignore=**/node_modules/**
or even
set wildignore=**node_modules**
didn't have the desired effect. I want to exclude content from that folder (and the subfolders contained there) as it doesn't form part of the codebase I'm working on in my JavaScript projects.
What works for me is
set wildignore=/home/dj/**/node_modules/**
In other words, an absolute path (I've tried using the tilde too, to no avail).
This is good enough for me, but it does irk me that something that should be so simple ... isn't.
Collaboration works best when you're in the same room, working together, and there's low friction between you and the tasks that you're trying to carry out. This is often not how it works in reality, however. Demands on time and location mean that you're often working together, but remotely.
Moreover, the tools you use often hinder rather than help along the processes you're trying to follow. With today's cloud-based offerings, an awareness of best practices and a good dose of common sense and organisation, this reality can be improved considerably.
In this case, the collaboration was centred around the client's Fiori transformation activities. Moving towards a user-centric view of business processes means embracing Fiori, as a philosophy and as a platform. The best way to make that successful is to find the appropriate balance between standard and custom, bringing about the right solution for each of those business process. Here, this meant adapting standard SAP Fiori apps and extending them to fit. It also meant extending the Fiori Launchpad itself with features that were essential to the client.
In turn, this implied using the right tools to extend and test Fiori apps, employing best practices, and embracing the developer workflow that teams outside the SAP developer ecosphere, especially those in the open source world, have enjoyed for a long time. Workflow that includes automatic code checking (also known as linting), distributed source code control, feature branch based development, and peer-to-peer code reviews.
Before diving in, let's circle around in the cool autumn air and examine the building blocks of the solution from above.
(If you haven't guessed already, Workstation 1 represents a client team member and Workstation 2 represents a Bluefin team member.)
In the client's on-premise part of the landscape, we have the usual suspects - a backend ECC system fronted by what we used to call the Gateway system but which we refer to now as the Frontend server, as it contains the UI Add-On supplying the Fiori infrastructure too. In addition to that, we have the HANA Cloud Connector (HCC) connecting outwards to the HANA Cloud Platform to create a secure tunnel.
In the cloud, there's the HANA Cloud Platform (HCP) which provides (amongst many other services) the Web IDE, my number one tool of choice for Fiori implementations. There's some reverse proxy and routing magic that is represented by destination definitions too, allowing connections from services on HCP to be made to on-premise systems via the HCC. In the context of the Web IDE, there are two functions that these connections enable: Consumption of OData services, and access to the repository of Fiori app artifacts.
There are also non-SAP services we're using. The first is GitHub, as the master repository for our Fiori artifacts and the changes that we're making to them, and perhaps just as importantly as the place where we manage development tasks, organise work into feature branches and perform code reviews. Then there's the collaboration and communication tool that's taking the world by storm: Slack. It's like IRC for the 21st century, and is - dare I say it? - a game-changer. And finally Trello, where we coordinate the overall tasks at a higher level of granularity, including those not specific to development. The biggest benefit of Trello is the immediacy - from idea to onboarding the team to ideas capture in literally seconds.
In a working session that takes place usually face to face, we plan the overall work items and record them in Trello, great board-style team collaboration in a software-as-a-service mould. The interface is simple - boards, with lists of cards that you can drag around, assign and decorate with other metadata. Once we've done this, it's often the last time the team will be together in person for days or weeks on end.
A lot of these cards in Trello are then translated into tasks in GitHub, stored on the private repository that represents the software artifacts of the project. These tasks are then assigned (sometimes self-assigned) and work is started upon them. Tasks represent either features or bug fixes; they're the same thing from a workflow perspective - a piece of development that needs doing, then reviewing, then being accepted into the collection of artifacts that will make their way eventually to test and production systems.
A feature branch is created in GitHub to contain that work and this branch is pulled into the Web IDE where we spend most of our working day. The integration that the Web IDE offers with git-based source code control systems is ideal (and as of this month it's got even better, with additional access to on-premise git repository systems). Work is done and tested locally in the test harnesses that the Web IDE offers, connecting to the on-premise frontend server to access the OData services upon which the Fiori apps rely.
All this time, communication is taking place in Slack in a private channel dedicated to the members of the extended team which includes both client and Bluefin folks. This Slack-based communication is enhanced by integrations with GitHub and Trello - events in these two systems (such as the opening or closing of a merge request) are posted into the conversation flow.
Work continues in the Web IDE, using facilities such as linting (to catch style and syntax errors at design time) and the extension wizards which make adding extensions to SAP Fiori apps pretty straightforward.
Once a piece of work is ready, it's pushed back to GitHub where a request to merge code upstream is made. This triggers a code review process between the members of the team. A feature branch may also be pulled back into the local repository of another team member's Web IDE for them to be able to test out the changes while keeping their work intact and separate. We have our business analysts do this too (they have Web IDE accounts as well) - they can try out the changes directly on the HCP before we continue.
Based on the code review itself, further code changes may be necessary to address issues highlighted but once they've been made and the review is done, the code is merged into the so-called "master" branch which will eventually make its way back to the on-premise frontend server to be deployed through the SAP system landscape in more traditional SAP methods.
The overarching mantra for code changes, feature branches and merge requests in this new world of feature-branch-based development is: "Master is always deployable", meaning that anything you merge into master must be tested and reviewed because it could be deployed to production at any time!
One great feature of the combination of the Web IDE and destinations defined in HCP is that the secure access allowed to specific on-premise SAP systems via the HCC is all one needs to partake in this collaboration. I use my own instance of the Web IDE, as do my client colleagues. And we do that from wherever we are. No VPN is required, which means that the cloud philosophy, and software-as-a-service thereupon, is really working well for us. The security and connectivity are managed where they should be - not on my laptop, but in a secure location on SAP's servers.
I've only scratched the surface of detail here. Perhaps you'll be intrigued enough to go and find out for yourself how services such as these can massively improve collaboration and productivity, and how a developer workflow that embraces today's approaches is the right way forward for the Fiori revolution. Maybe you'll investigate what modern communication tools like Slack can do for you and your partners. Hopefully you'll see that using the zero-install HCP-based Web IDE is the right direction for you and your Fiori initiatives.
If there's one thing you take away from this post, it should perhaps be this: the future is here now, and it's there for you to embrace. Go for it.
(This post is part of the F3C series)
There are two videos in the series. In the first, MPJ explores what functors are, and based on material in the blogosphere, makes some statements that aren't quite accurate. So he follows up with a second video correcting those statements, which I think was an excellent way to fix things. There's a lot to be said for learning by watching other people learn.
There's quite a bit to take in from these two videos on functors, so here's my summary of functor essentials:
- a functor is a type -- for us, an object or container -- that has a `map` method[^n]
- this container can contain elements of any type
- the `map` method transforms the elements, by applying the supplied function to each of them[^n]
- while the elements are transformed, the structure of the container remains intact
- the result is a new functor
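To make those essentials concrete, here's a tiny home-made functor of my own (not from the video) - a `Box` that holds a single value and has a `map` method:

```javascript
// A minimal home-made functor: a Box holding one value, with a map
// method that transforms the contents but keeps the Box structure
const Box = x => ({
  map: f => Box(f(x)),  // apply f to the contents, return a new Box
  value: () => x        // a simple way to get the contents back out
})

const result = Box('fox')
  .map(s => s.length)     // Box(3)
  .map(n => n % 2 === 0)  // Box(false)

console.log(result.value()) // false
```

Each `map` call returns a new `Box`, so the calls chain, just as they do with `Array`.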
As MPJ points out, the most common functor in our context is JavaScript's `Array`. Here it is in action[^n]:
["the", "quick", "brown", "fox", "jumps", "over", "the", "lazy", "dog"]
.map(x => x.length)
.map(x => x % 2 === 0)
// [false, false, false, false, false, true, false, true, false]
The point of the second `map` in the example is to show that what is returned by the first `map` is indeed a functor -- another `Array`, in fact -- over which we can call `map` again.
While `Array` is the most obvious functor, MPJ points out that some implementations of Promises are functors, as are Streams (the latter is the subject of the next episode in this series). There's a Promise example, but it's not quite right - there are a couple of bracket-related typos. Here's what I think it should look like instead:
import Promise from 'bluebird'
const whenDragonLoaded = new Promise((resolve, reject) => {
setTimeout(() => resolve([
{ name: 'Fluffykins', health: 70 }
]), 2000)
})
const names =
whenDragonLoaded
.map(dragon => dragon.name)
.then(name => console.log(name))
One thing I was slightly unsure of in correcting the brackets was replacing the curly braces (the ones that wrapped the object literal in the `resolve` call) with square brackets. At one level it is fine - the curly braces were simply not syntactically correct. But I felt as though by adding the "container" syntax I was "helping" the Promise be a functor. Moreover, in one of the articles on JavaScript Promises that I read, I picked up the sentiment that doing exactly this was deemed bad practice. Anyway, I'm sure things will become clearer here as I explore further.
[^n]: (although it doesn't have to be called that, I guess - the method name could be different, but have the same effect)

[^n]: "lifting" the function into the container

[^n]: I'm deliberately using small, generic variable names, as that's what functional programming suggests to me - making things simple and generic means I don't want to inadvertently attach "contextual baggage" with variable names that mean something only in one context
(This post is part of the F3C series)
This video episode was recorded over a year ago. Since that time, a native implementation of Promises has become available in my scratchpad of choice, (the developer tools console of) Google Chrome. So there's no requirement for us to use `babelify/polyfill` or other similar techniques; we can just go ahead and say:
new Promise((resolve, reject) => { })
and get
Promise {[[PromiseStatus]]: "pending", [[PromiseValue]]: undefined}
Because of this, it would seem that many of the implementation-related pitfalls detailed in the Promise Cookbook, to which MPJ refers in his video description, are no longer relevant. Which is a good thing.
Like recursion, asynchronous programming in general and promises in particular make me stop and think. There's an uphill element to thinking about asynchronous tasks, and chaining them together. So I reflected a little longer than normal on what I'd learned from this video. Moreover, I'd been intrigued by what MPJ had mentioned in the video, multiple times, about composability.
I have a general understanding of function composition from my relationship with Clojure. But following the video, I explored some other material, including a talk from Full Stack Toronto: Reduce Complexity with Functional JS by @frontvu. This talk gives a very brief introduction to some of the key concepts of functional programming, and includes this implementation of a `compose` function:
var compose = function () {
var funcList = arguments;
return function () {
var args = arguments;
for (var i = funcList.length; i-- > 0;) {
args = [funcList[i].apply(this, args)];
}
return args[0];
};
};
This allows us then to do something like this:
compose(n => n * n, n => ++n)(5)
// 36
In other words, we're composing a couple of functions (the anonymous increment and square functions here) to form a new function.
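With ES6, one might sketch an equivalent `compose` more succinctly using `reduceRight` (my own version, not from the talk):

```javascript
// compose via reduceRight: functions are applied right-to-left,
// matching the behaviour of the loop-based version above
const compose = (...fns) => x => fns.reduceRight((acc, fn) => fn(acc), x)

console.log(compose(n => n * n, n => ++n)(5)) // 36
```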
This is similar to Clojure's native composition function `comp`. There are many examples of how `comp` can be used, but my favourite one, beautifully simple, is this:
(filter (comp not zero?) [0 1 0 2 0 3 0 4])
;;=> (1 2 3 4)
Anyway, back to promises in JavaScript. In the light of this reflection on `comp` and `compose`, the point that MPJ was making about promises being composable makes sense. We can think of function composition as chaining functions together, in a similar way to the Unix pipeline idea - the output of one function gets fed into the input of the next. It's almost so simple as to be too hard to understand.
I had found myself on a journey to the centre of the earth just to understand function composition, whereas it had been sitting there innocently in front of me all this time. The ability to string promises together, passing promises and values through the `then` chain, is pretty damn powerful. Add to that the ability to treat a list of promises as a single unit, which `Promise.all` gives us, and there's a compelling argument for getting to know more about promises right there.
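To illustrate both points with native Promises (my own sketch, with a hypothetical `double` function): values flow through the `then` chain, and `Promise.all` turns a list of promises into a single promise of a list:

```javascript
// chaining: each then receives the previous resolved value
const double = n => Promise.resolve(n * 2)

const chained = double(21).then(n => n + 1) // resolves to 43

// composition over a list: Promise.all resolves when all of its
// promises have resolved, yielding a list of the resolved values
const all = Promise.all([double(1), double(2), double(3)]) // resolves to [2, 4, 6]

chained.then(n => console.log(n)) // 43
all.then(ns => console.log(ns))   // [ 2, 4, 6 ]
```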
(This post is part of the F3C series)
"Recursion is when a function calls itself, until it doesn't" -- MPJ
While superficially flippant, this definition is rather accurate and succinct. It's what I'm going to adopt when explaining it at a high level to someone new.
I think there are two levels to understanding recursion. The first is at the theory level. There has to be a way for the repeat calling to end, either a base case, as was shown in the countdown example, or a situation where the function runs out of data to process, as was shown in the animals hierarchy example.
The second is at the practice level. I've sometimes found it cognitively difficult to "see" the recursive pattern and how it might apply in a solution. It helps to visualise what's happening, and so that's what we'll do here with the animals hierarchy example.
The following is the same code that was shown in the video, but with some extra counting and logging:
let calls = 0
let makeTree = (categories, parent, level) => {
calls++
level++
let node = {}
let children = categories
.filter(c => c.parent === parent)
console.log(
level,
parent, '->',
children.length
? children.map(c => c.id).join(", ")
: "none")
children.forEach(c => node[c.id] = makeTree(categories, c.id, level))
return node
}
console.log(JSON.stringify(makeTree(categories, null, 0), null, 2))
console.log(calls, 'calls')
The variable `calls` is used simply to count how many times the `makeTree` function is called. There's now a third parameter, `level`, in the `makeTree` function signature, which we seed with the value 0; it's incremented each time the function calls itself, so we can count how deep the rabbit hole goes.
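The `categories` data itself isn't shown in this excerpt; reconstructed from the logged output below, it presumably looks something like this (an assumption on my part - the ids and parent relationships are taken from the output, the property names from the filter on `c.parent` and `c.id`):

```javascript
// Reconstructed sample data: each category points at its parent
// (null for the root candidate) via the parent property
const categories = [
  { id: 'animals',   parent: null },
  { id: 'mammals',   parent: 'animals' },
  { id: 'cats',      parent: 'mammals' },
  { id: 'dogs',      parent: 'mammals' },
  { id: 'persian',   parent: 'cats' },
  { id: 'siamese',   parent: 'cats' },
  { id: 'chihuahua', parent: 'dogs' },
  { id: 'labrador',  parent: 'dogs' }
]
```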
This is what the output looks like:
1 null '->' 'animals'
2 'animals' '->' 'mammals'
3 'mammals' '->' 'cats, dogs'
4 'cats' '->' 'persian, siamese'
5 'persian' '->' 'none'
5 'siamese' '->' 'none'
4 'dogs' '->' 'chihuahua, labrador'
5 'chihuahua' '->' 'none'
5 'labrador' '->' 'none'
{
"animals": {
"mammals": {
"cats": {
"persian": {},
"siamese": {}
},
"dogs": {
"chihuahua": {},
"labrador": {}
}
}
}
}
9 'calls'
The main structure is the same. And clearly there are 9 calls. That makes sense, because there are 9 candidate parents (including `null`, from the initial call).
We can also see how many levels the recursion descends (5), and how it ascends too, from the last feline parent candidate 'siamese' at level 5, back up to the 'dogs' parent candidate at level 4.
This is what it looks like visually:
1 null
|
2 animals
|
3 mammals
|
+--------+--------+
| |
4 cats dogs
| |
+---------+ +---------+
| | | |
5 persian siamese chihuahua labrador
| | | |
(none) (none) (none) (none)
(The fact that there are no children belonging to any of the candidate parent nodes at level 5 is represented by `(none)`.)
And there you have it. Practise thinking about recursion, and how it applies to problems like this.
Oh yes, and there's also something I wanted to say about tail call optimisation, but there isn't space here to do any justice to it. Perhaps a subject for a later post. In the meantime, remember the immortal and recursive suggestion from Scarfolk Council: "For more information, please reread".
(This post is part of the F3C series)
The "what" part of currying is quite straightforward. The "why" takes a little more time to understand, but once you do, it's a big "aha" moment.
Currying is the process of taking a function that expects multiple arguments, and turning that into a sequence of functions, each of which takes only a single argument and produces a function that is expecting the next argument. This sequence ends with a function that takes the final argument and produces the value that the original function was designed to emit.
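As a minimal sketch of that sequence (my own hypothetical example, not from the video): a two-argument `add`, and its curried form, where each function takes a single argument:

```javascript
const add = (x, y) => x + y        // expects both arguments at once
const curriedAdd = x => y => x + y // a sequence of one-argument functions

const addFive = curriedAdd(5)      // a function awaiting the final argument
console.log(addFive(3))            // 8
console.log(add(5, 3))             // 8 - same result, different shape
```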
It's not immediately apparent why you'd want to do this. This particular aspect of functional programming has probably remained more of a mystery in the JavaScript world mostly because the facility is simply not available in the core language implementation, and therefore folks aren't as readily versed in its usage.
But if you've started to embrace functional programming in JavaScript and have already enjoyed creating "helper" functions that you can then use (in a composition sense) in other higher-order functions, the reason why currying is useful is clearer.
Useful is a plain word. In MPJ's example, using the `curry` facility provided by the Lodash library to enhance the way `filter` is employed is actually quite beautiful. His exclamation "wow" (I heard it in lower case) should perhaps have been more "WOW".
Here's the relevant section of the example code:
let hasElement =
_.curry((element, obj) => obj.element === element)
let lightningDragons =
dragons.filter(hasElement('lightning'))
`hasElement` is the helper function that is used to dynamically filter the data we're looking for. It has a common pattern ("does a particular property have a particular value?"). But the original (pre-currying) invocation was a little cumbersome:
dragons.filter(x => hasElement('lightning', x))
With the new ES6 syntax we're already reducing the amount of code using the fat arrow syntax. But with currying, we can reduce it even further, not with syntax, but by embracing currying and partial application. Moreover, we get even closer to saying what we want, rather than how to get it.
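Without Lodash, the same effect can be had by currying `hasElement` by hand (my own sketch; the `dragons` data here is hypothetical, shaped like the video's):

```javascript
// manual currying: take the element first, then the object to test
const hasElement = element => obj => obj.element === element

const dragons = [
  { name: 'Fluffykins', element: 'lightning' },
  { name: 'Noomi', element: 'ice' }
]

console.log(dragons.filter(hasElement('lightning')))
// [ { name: 'Fluffykins', element: 'lightning' } ]
```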
One thing that occurred to me, that MPJ didn't mention explicitly (perhaps it was too obvious), is the order of the parameters in the `hasElement` function. They're deliberately set that way round, so that currying will work well for us. If you stare long enough at the example code, you'll realise that this is what's going on:

- `hasElement('lightning')`, with only a value for the `element` parameter, returns a function
- that intermediate function is waiting for a value for the `obj` parameter
- the `filter` function is going to do exactly this - call the intermediate function, passing each object in the `dragons` array

For more on the order of parameters, and a generally very entertaining talk, I recommend Brian Lonsdorf's "Hey Underscore, You're Doing It Wrong!".
Back? Good. The series on functional programming is a playlist on MPJ's channel, called "Functional programming in JavaScript". I thought it might be a nice exercise to summarise each of the videos in a series of posts here, sort of like a companion guide (hence the name "F3C" - "FunFunFunction Companion").
I don't want to take away from the absolutely great videos themselves, you should watch them again and again. Rather, I wanted to try to present the key messages from each video in the series in posts over here. My intention is to keep the posts short (up to 500 words) and to the point, and perhaps include some code of my own.
This sounds to me like a good idea right now, perhaps it might not be such a good idea later on, we'll see.
(Warning: I interchange the words "array" and "list" in a reckless fashion throughout this series, as well as "object" and "map" (the type of data structure); don't worry though, treat them as the same things).
Oh, and if you're in the SAP developer ecosphere and attending SAP TechEd EMEA in Barcelona this year (08-10 Nov) - you might want to come along to my session DEV219: "Building More Stable Business Apps with Functional Techniques in JavaScript".
(This post is part of the F3C series)
Functions have a signature (the parameters) and a body. The body is defined in a block, traditionally in curly braces[^n]. This block defines a scope, directly relating to the body of the function. In other words, it's function scope.
JavaScript has support for closures. This is a powerful feature, which gives function bodies dynamic access to data in the surrounding scope. Or scopes. I had been wondering about whether this access extended to the next outer level only, but in fact, it's access all the way down (or up?). Consider this:
var a = "Something"
function deep() {
var b = "The Universe"
function deeper() {
var c = "Everything"
function deepest() {
console.log(a + ", " + b + " and " + c)
}
deepest()
}
deeper()
}
a = "Life"
deep()
// => Life, The Universe and Everything
The `deepest` function has access not only to `c`, but to `b` and `a` also.
Using closures is such a natural part of JavaScript that if you're not too familiar with them, you just have to practise until you have them under your skin. In languages that don't support closures, you can end up with a lot of explicit signatures and passing of data to callbacks, which can get messy and over-busy. So closures are also good for reducing the footprint of your code, which we know already means fewer chances for bugs.
[^n]: Although with single-expression bodies introduced with the ES6 fat arrow, we don't need the curly braces.
(This post is part of the F3C series)
Reduce is powerful, much more than its siblings. You can use it not only to sum values (the classic "hello world" example for `reduce`), but also to build up a complex end result that may look nothing like what you started with, i.e. the list you are calling `reduce` upon. It's a mistake to think of `reduce` in terms of what that word means in English; you're not necessarily making something smaller than what you started with, you can make pretty much anything, of any size[^n].
As well as function composition (slotting functions into each other), another common style in functional programming is function chaining: the binding together of small functions that operate on data one after the other[^n].
The example that MPJ uses in this video is a good illustration of chaining. Aside from the call to `console.log`, the entire program is a single statement - an assignment of a value to the `output` variable:
var output = fs.readFileSync('data.txt', 'utf8')
.trim()
.split('\n')
.map(line => line.split('\t'))
.reduce((customers, line) => {
...
}, {})
What MPJ doesn't say explicitly, but is one of the reasons why this approach is so powerful and simple, is that each of these functions takes input and produces new output. There's no mutation of state. This means that there is less to go wrong. Further, apart from `trim`, all the functions produce or operate on lists:

- `split` operates on a string and produces a list
- `map` operates on a list and produces another list
- `reduce` operates on a list and produces ... well, whatever you want

Notice that this part: `map(line => line.split('\t'))` actually produces a list ... of lists.
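Here's a self-contained sketch of the same chaining style (my own data, standing in for the video's data.txt, and my own guess at a reduce body - not MPJ's actual code):

```javascript
// tab-separated lines of name/amount pairs, grouped into an object
const input = 'mark\t33\nbob\t28\nmark\t44'

const output = input
  .trim()
  .split('\n')                   // list of lines
  .map(line => line.split('\t')) // list of [name, amount] lists
  .reduce((customers, [name, amount]) => {
    customers[name] = customers[name] || []
    customers[name].push(Number(amount))
    return customers             // accumulate everything into one object
  }, {})

console.log(output) // { mark: [ 33, 44 ], bob: [ 28 ] }
```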
[^n]: It may help to think of reduce by its other common name, in other languages: "fold", a name which signifies the action of executing the callback function on each element in turn.
[^n]: This reminds me of the Unix philosophy of small programs doing one thing well, connected and passing data via a series of pipes.
(This post is part of the F3C series)
Higher-order functions `map`, `filter` and `reject` perform list transformations, where each time the end result is still a list. `find` is a related function which is designed to return just a single element.

`reduce` is a related higher-order function, but the shape of the end result is whatever you want it to be. It's like the Swiss Army chainsaw of list transformations. Unlike the functions above, `reduce` takes two parameters. As well as the callback function, it takes a starting value, which - after the accumulation that takes place when processing each of the list's elements - becomes the end result.
The starting value can be any "shape" - a scalar, an array, or an object.
This means that if `map`, `filter`, `reject` or similar functions don't do what you need, you can write your own using `reduce`. A common fun exercise is implementing those functions with `reduce`, too.

Let's reimplement `reject`, this time using `reduce`:
Array.prototype.reject = function(pred) {
return this.reduce((l, x) => {
if (! pred(x)) l.push(x);
return l;
}, []);
}
The function implementation transforms the given array (in `this`) with `reduce`. Here the starting value is `[]`, an empty array, and we use similar logic with the passed-in predicate function to determine whether each element should be accumulated into the starting value or not.
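To check it does what we expect, here's the reduce-based `reject` again, with a quick numeric example of my own:

```javascript
// reject built on reduce: accumulate only the elements that fail
// the predicate test
Array.prototype.reject = function (pred) {
  return this.reduce((l, x) => {
    if (!pred(x)) l.push(x)
    return l
  }, [])
}

console.log([1, 2, 3, 4, 5].reject(x => x % 2 === 0)) // [ 1, 3, 5 ]
```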
(This post is part of the F3C series)
Moving on from the higher-order function `filter` in the previous video, another higher-order function is introduced: `map`. `map` is similar to `filter` in that it also works on an array, producing another array. It is different to `filter` in that the function passed in should output elements for the array being produced, rather than boolean values that dictate the presence of elements in the new array.
So `map` "transforms" arrays. Compared to the imperative version of producing a list of animal names, the functional version with `map` is an awful lot shorter. It gets even shorter with the introduction of ES6 arrow functions. Shorter code means less surface area for bugs, but it also improves the readability, and arrow functions help with this too.
Further, as the functions are so short, embellishing them with "meaningful names" for the parameters actually detracts from that readability, so as is often the style with functional programming elsewhere, short parameter names can be used to good effect.
The shortest version that MPJ comes up with is 39 characters, but there are extraneous brackets around the parameter that we can remove, reducing it even further:
var names = animals.map(x => x.name)
Lovely.
(This post is part of the F3C series)
Functional programming makes you a better programmer because you're going to be writing less code. Less code because you're able to reuse more, and also because you're not having to write 'mechanics'. You're writing more what you want, rather than how to get it. There are fewer bugs too, not only simply because there's less code, but that code is easier to reason about.
Functions are values. They can be passed around in variables and "slotted into each other". Functions that take functions as arguments are called higher-order functions. Functions that produce other functions as results are also higher-order functions. Composability is an aspect of functional programming, in that small, simple functions can be combined. A small enough function with no cognitive or contextual baggage is more likely to be reusable, too.
The filter function (`Array.prototype.filter`) is shown as an example of a higher-order function. Its use, to filter an array of animals, is compared to the imperative approach to doing the same thing. This latter approach is more difficult to reason about, because there's more code, and more going on. What's not said explicitly is that in the imperative version, there are more variables whose values change. This mutable state in general brings about risks of bugs, and makes code harder to reason about and also to debug.
MPJ mentions the function `reject`, which he mistakenly attributes as a standard function on `Array`s. There isn't one (you can employ a functional programming library such as Lodash or Underscore to get it), but I thought I'd have a go at writing one.
Given the `animals` array in the video, here's how one might go about adding a `reject` function, and using it:
Array.prototype.reject = function(pred) {
return this.filter(function(x) {
return ! pred(x)
})
}
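And the "using it" part, with some hypothetical data of my own, shaped like the video's `animals` array (the definition is repeated here so the snippet stands alone):

```javascript
Array.prototype.reject = function (pred) {
  return this.filter(function (x) {
    return !pred(x)
  })
}

// hypothetical data, shaped like the animals array in the video
const animals = [
  { name: 'Waffles', type: 'dog' },
  { name: 'Fluffy', type: 'cat' },
  { name: 'Sheba', type: 'cat' }
]

console.log(animals.reject(x => x.type === 'cat'))
// [ { name: 'Waffles', type: 'dog' } ]
```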
`reject` is a (higher-order) function that takes a function as its argument. I'm using the parameter name `pred` here; the word "predicate" is often used to describe this sort of function (one that returns a boolean, often used in this sort of context). The array upon which `reject` is made to operate (represented by `this`) is filtered, and the predicate function is used to determine whether each array element remains or not. Note the negation (`!`), as here we want to throw away, rather than keep, elements that pass the predicate test.
I'm an SAP dinosaur, and not ashamed to admit it. I embraced S/370 assembler, and was bathed in the glow of the green screen 3278 terminals that I used with SAP R/2 a long time ago. Even today I dream of a return to punched cards and Job Control Language.
But that's not going to happen, and apart from some odd exceptions (like me), the world breathes a sigh of relief. Joking aside, there's a revolution that's been gathering pace since mid 2013, when SAP Fiori arrived on the scene. We've covered many aspects of Fiori on our website already, so feel free to inform yourself if you haven't done already. What I want to talk about are aspects of that revolution which, if embraced, are disruptive enough to help your organisation move forward as you digitally transform your business and step into the next decade.
First a bit of context, to set the scene. Over the years, SAP have introduced many initiatives to address the lot of the user, but they've been technology driven, sometimes inspirational, sometimes challenging, and often disjointed. SAP Fiori is different, as it's design driven, with the user -- the consumer -- front and centre. The realisation, in the form of technology and platform, is secondary. So Fiori is very appropriate as the right initiative to embrace and disrupt when it comes to people.
The Fiori Launchpad is an important component in the Fiori universe, allowing direct and realtime visibility of KPIs, and consumption of apps, in one place (across all devices blah blah, yep, we know the score). Look one level up from the Launchpad, and consider what you see. The Launchpad, as well as the well-designed app-based approach to consuming functionality, is suitable and available for your business partners as well as your internal departments. Imagine that!
Why keep the goodness of all that Fiori has to offer to yourselves? Spread the love to your customers and suppliers, and they're more likely to reciprocate. How often have you gritted your teeth before launching into some old style portal, at your desk, just to check on the progress of an order or invoice?
What's more, with the power of the SAP HANA Cloud Platform, Portal Service, you can even move away from the tile-based approach, while still benefitting from all the design goodness, by building freestyle sites that are both fit for purpose but also espouse the Fiori best practices for app-level interaction.
SAP Fiori is here, and it's here to stay. Not only for what we traditionally think of in terms of SAP ERP systems, but also for other products - Ariba, Concur, Lumira to name but three. Gradually, the Fiori revolution is coming to these systems.
The architecture for Fiori is based largely on the concept of a "frontend server", which as the name suggests, abstracts away the nuances of whatever SAP systems sit behind it - ECC, CRM, SRM and more. So we have connected systems and processes through a uniform interface. Dissonance and impedance that normally arise through context switching and different user interfaces can melt away.
It's not only SAP systems that can participate in this connected state. The toolkit that powers Fiori, UI5, comes in two flavours - one with an associated SAP licence, the other with an Apache 2.0 open source licence. The latter, named OpenUI5, means that you can safely and legitimately provide a Fiori look and feel for your non-SAP systems too, further harmonising the experience across your enterprise.
I've hinted at this disruption already. Yes, we all know about the realtime dashboards from the BI stable. There are plenty of tools over there that can give you insights into data in realtime. But it's "over there". There's a disconnect, a cognitive gap when you switch from your transactional tools to your analytical ones.
With Fiori, and the infrastructure that comes with it (the Launchpad, the Overview Page concept, Smart Business and more) that gap disappears. Your users (and partners for that matter) can move from insight to action in one smooth transition, because both are in the same place, on the same page (for examples of this, see The SAP Fiori Launchpad as a dashboard for my running KPIs).
I've worked with SAP technologies for 30 years, and I suggest that never before has there been such an opportunity, with joined up technology initiatives, for SAP customers to embrace and make their own, to move themselves forward and beyond where they are right now. Yes, I hear you say, it's only a frontend. OK, fair point. But it's seamless, connected and live. Most importantly though, it's focused on people - you, your users, and your business partners. Systems and business processes don't make a difference. People do.
SAP Inside Tracks are community-organised events for like minded people to come together to share knowledge on SAP related subjects. These subjects are commonly of a technical nature, but the range is far and wide overall. Beyond knowledge sharing, the events are a great opportunity to network and build local communities. Think "unconference" rather than "conference" and you won't go far wrong!
The events have been taking place across the globe for a number of years; take a look at the dedicated SAP Inside Track space on the SAP Community Network for more information. This is the fourth time we've run the event in the north of England ... in 2013 we held it in Manchester, in 2014 and 2015 we were graciously hosted by the folks at Sheffield Hallam University, and this year we returned to our roots, running SAP Inside Track Manchester, or "sitMAN", in the birthplace of the industrial revolution and the true home of computing! This time around the excellent Manchester Digital Laboratory - MadLab to most people - was our host, not only for the two days of sessions but also for the evening event at the end of the first day.
In the run up to the event, attendees submit session proposals, which are then organised into one or more streams shortly before the start. Our event had a good mix of content on Day 1, covering the HANA Cloud Platform (HCP), the Internet of Things (IoT), User Experience, SAP S/4HANA, BI, Security, UI5 and more.
Sourced from the community - the attendees themselves - the session topics are based upon whatever someone wants to talk about. This brings a really interesting dynamic to the event. For example, beyond the technical topics already mentioned, we also discussed ways to tackle the mountain of email under which we all find ourselves.
Day 2 is normally given over to an all day hands-on workshop. This time, we ran it on building custom tiles on the Fiori launchpad. Sounds familiar? Of course - it was related to what we presented in our Bluefin webinar event The SAP Fiori Launchpad as a Human-Centric Dashboard from earlier this year.
As has now become tradition at sitMAN, we finished off Day 1 in style, with an organised beer tasting seminar run by The Beermoth, a most excellent purveyor of fine beers just round the corner from MadLab. Thence we naturally furthered our exploration of all things beer in nearby classic pubs, from Port Street Beer House, through the Marble Arch, Smithfield Tavern and beyond. Building and sharing SAP tech knowledge is thirsty work, you know!
Most SAP Inside Track events are free to attend, or there's a nominal fee. Our sitMAN event was free. But the venue hire and beer seminar wasn't, so it's only right to thank the sponsors of the event, without whom it couldn't have happened. Our very own Bluefin Solutions, ITelligence, Resulting IT and Zoedale Ltd - please stand up and be counted, and accept our thanks again for helping make this happen. Of course, the event doesn't just happen of its own accord; there's a core set of folks behind the scenes who do a lot of the organising, so many thanks are due there also!
SAP Inside Tracks are essentially grass roots events, and this one was no exception. No pomp, circumstance or ceremony, just plain old community spirit and down to earth practicality, and above all, a willingness to participate and share knowledge and stories. So the final thanks should go to the attendees who not only turned up, but took part and made the two days a great success. Until next year, happy hacking!
The SAP Fiori Cloud Edition is here. Actually, perhaps I should call it "SAP Fiori, cloud edition" or even "SAP Fiori, cloud service" - the name keeps changing, but thankfully the service is the same.
It was made generally available (GA) at the end of the first quarter of this year, and is definitely something you should be looking at for your Fiori journey.
So first of all, what is it? Well, it is pretty much exactly what it says it is - it's Fiori, in the cloud. But to understand what that actually means, let's step back and look at Figure 1 - a simplified diagram of a typical on-premise architecture that includes Fiori in a traditional ABAP stack context.
Figure 1: Simplified architecture for on-premise Fiori
As we know, Fiori is many things, including SAP's strategic approach to User Experience (UX) across all products, a series of detailed design guidelines, and a collection of actual apps. To be able to install and make those apps available for users in your organisation, you need a number of components. One is SAP Gateway, providing the backend enablement for OData as well as the frontend exposure as consumable OData services. The other is the SAP UI Add-On for NetWeaver, providing the infrastructure for Fiori - the Launchpad and related shell services, the UI5 runtime, and more.
In addition, the Fiori apps you choose to implement must be installed... and they're installed on the same server as the UI Add-On and the frontend Gateway components (there's a backend OData component to each Fiori app also, but we'll leave that for now).
The usual recommended approach is to have a "frontend server" containing the UI Add-On and the frontend Gateway components, and acting as a container to hold the Fiori apps themselves. If you've already installed a Gateway hub from pre-Fiori days, that's great and an ideal candidate for becoming such a frontend server.
But if you haven't, and want to get started with Fiori, then you'd normally need to install, configure and maintain a standard tiering (development, test & production) of ABAP stack systems to act as that frontend server. That comes with capital and expense costs as with any new SAP server install, not to mention the long term maintenance.
With SAP Fiori Cloud Edition, this requirement goes away. The services that would normally be provided by the frontend server are made available to you and your users, in the cloud - on the SAP HANA Cloud Platform, to be precise.
Figure 2 shows the same Fiori context as we saw in Figure 1, but instead of the on-premise frontend server, the SAP Fiori Cloud Edition services are employed.
Figure 2: SAP Fiori Cloud Edition removes the need for a frontend server.
The entire Fiori infrastructure, including the Launchpad, the UI5 runtime, and the Fiori apps themselves, is provided as part of this cloud service.
Also included is the HANA Cloud Integration OData Provisioning service, known as "HCP, OData provisioning". This is what was previously known as Gateway as a Service (GWaaS). HCP, OData provisioning provides the equivalent services that the OData components on a frontend server would normally provide (the rightmost Gateway box in Figure 1, that is): Connect to the backend server to coordinate the calls to the OData enablement ABAP classes, and expose the results in an OData shape and colour.
Finally, with SAP Cloud Identity (represented by the "Auth" box in Figure 2) connected to an on-premise identity provider (represented by the "IdP" box), you have everything you need to get going with Fiori, without the up front capital investment, server landscape extension, and continued maintenance.
Moreover, you don't need to install or maintain the apps themselves, that's also done for you.
So is there a catch? Well, there are no catches per se, but there are some important points to be aware of. Going into any new SAP software offering without prior knowledge is never a good idea, so here goes:
Backend OData components are still required: The backend enablement components of any given Fiori app are still required, of course - to provide the frontend Fiori app logic with the data and functions from your backend systems of record. This is the leftmost "Gateway" box in both Figure 1 and Figure 2. If you're running on a 7.40 or above ABAP stack, you have these components anyway.
You'll need cloud connectivity: Obviously you'll need to connect your on-premise systems to the cloud. Fear not, this is the domain of the SAP HANA Cloud Connector (HCC, as shown in Figure 2). It's a small Java application that runs within your on-premise environment and connects outwards forming a secure tunnel to the HANA Cloud. You add whitelist entries to allow access from the SAP HANA Cloud Platform to resources in your on-premise landscape - those will be the Gateway endpoints in your backend systems.
Not all apps are available yet: Each Fiori app that is made available in the SAP Fiori Cloud Edition offering undergoes a series of tests and checks, in a provisioning process that ends up with that app available for consumption within the context of the service. This means that not all apps are available right now - but there's a new filter option within the SAP Fiori Apps Library (see Figure 3) that will show you which ones are.
Figure 3: The "SAP Fiori apps on SAP HCP (SAP Fiori, cloud edition)" filter.
You may need to consider bandwidth: Users connected to your on-premise network will be going out to the cloud to consume the Fiori apps. You may need to consider the bandwidth requirements for this if you only have a minimal Internet connection. Then again, if you happen to already have a Gateway hub system on-premise, some of the traffic can be kept within the on-premise network to improve latency and save round trips to the cloud.
Is SAP Fiori Cloud Edition for you? Of course, only you can decide that, it depends on a lot of factors. But there's certainly a compelling argument based on the benefits of service-based application consumption from the cloud, benefits which include reduced landscape complexity, maintenance and capital cost. SAP Fiori Cloud Edition has an associated subscription price, but when comparing to traditional on-premise related costs, it can make a lot of sense.
To help make the decision, SAP offers a demo version of the Cloud Edition, where you can pretty much try out all the features, including app extensibility. It's definitely worth exploring, especially as there's no direct cost associated with that!
One of the stumbling blocks we see with Fiori is the requirement for "yet more infrastructure". This is, for me, the biggest selling point for SAP Fiori Cloud Edition. Rather than have a project weighed down by the requirements to get a frontend server up and running and at the right patch levels for Fiori, UI5 and the apps themselves - and to maintain those components at the appropriate patch levels too - you can concentrate on the real task in hand, bringing the beauty and simplicity of Fiori, powered by that awesome toolkit UI5, to your business.
See you in the cloud!
In the early days, the idea existed that there were SAP Fiori apps, and Fiori-like apps. Both phrases were used to distinguish apps built by SAP from those that weren't. It's clear that this distinction was actually irrelevant, in the light of what defines a Fiori app. You can say many things about Fiori apps, but certainly not "they must be built by developers at SAP".
With the SAP Fiori Design Guidelines and a nice hot cup of tea, you have everything you need to design and build your own Fiori apps.
This is not quite correct. The Fiori concept lives at the User Experience (UX) level, and in theory you can create a Fiori app, that is constrained and informed by the Design Guidelines, with any technology.
Once you descend to the User Interface (UI) level - where the rubber meets the road, so to speak - you can choose to develop your Fiori apps in any editor, with any workflow, using any toolkit or framework (and yes, sometimes I develop Fiori apps using vim on my antique Wyse serial terminal). But the end result must conform to the design language that is Fiori.
While all Fiori apps so far from SAP have been built with SAPUI5, that awesome toolkit of which I am a huge fan, it's not at all true to say that SAPUI5 is a prerequisite for an app to be classed as Fiori.
Look at the recent partnership announcement between SAP and Apple, to develop OS-native Fiori apps for Apple devices. While this is not much of a departure for the Design Guidelines (there are new iOS specific design language elements that incorporate Apple's iOS Human Interface Guidelines), the technology stack for the frontend development is rather different.
One could think about this in terms of OData, but we'll come to that in a second. SAP Gateway is the product name for the OData server implementation for the ABAP stack. And yes, of course there are plenty of Fiori apps that consume OData services that reside on ABAP stack systems. But there are even more Fiori apps that consume OData services that are not served by Gateway - they're served by the Extended Services part of the HANA platform, because those OData services reside directly on the HANA platform. So Gateway is not involved there.
If an app interacts with a backend using another type of protocol (i.e. other than OData), is it Fiori? Well yes. Fiori is about the beautiful swan on the water surface, not about the paddling underneath. There are guidelines internally at SAP that relate to the use of OData, but these are more of a technical nature.
Design Thinking is a very useful step in the Discover phase of any development, Fiori or otherwise. It's cost effective too - any changes at this stage are a lot less expensive to implement than when in the Develop or Deploy phase.
Moreover, while the Fiori Design Guidelines structure themselves around the concepts of personas, roles and tasks, Design Thinking is only one way of determining the input that will influence the outcome of the Design phase. So you don't need Design Thinking on your way to building a Fiori app, but it helps an awful lot.
One of the key design principles described in the Fiori Design Guidelines was "Responsive" (the others being Role-Based, Coherent, Simple and - my favourite - Delightful). This changed recently to "Adaptive".
The subtle difference expresses the point that while many Fiori apps are designed to work across all devices, some, from a practical perspective, are really not suited. Look at some of the SAP S/4HANA Fiori apps, especially those that present a complex grid view of information, and you'll understand why. So yes, while many Fiori apps are mobile-ready, not all of them are designed to be.
The SAP Fiori Launchpad is an important component in the complete Fiori experience. It's the starting point for users in many cases, is available on-premise and in the cloud, and is much more than simply a menu of options. But the idea of Fiori exists above any one component, and the very fact that you can set up Fiori apps to be launched in "standalone mode", i.e. without the need to access it from the Launchpad, shows us that the Launchpad is not essential.
Take this one step further - wrapping a Fiori app with Kapsel for a hybrid experience on a mobile device - and again, you have a Fiori app, but no Launchpad.
Let's look at this from a practical point of view, and how Fiori apps are most commonly built today. Of course, more skill and experience is almost always welcome, but when building Fiori apps, what's essential? Yes, knowledge of HTML5 (HTML, CSS and JavaScript) is very important, but the importance of those "raw" skills pales into insignificance compared to the importance of knowing the toolkit that SAP use to build their Fiori apps, and that we do too - SAPUI5.
SAPUI5 is an abstraction level above HTML5, and while it would be foolish to try and wield SAPUI5 without knowing about HTML5, the key skills you should be looking for in a potential frontend developer are those pertaining to SAPUI5.
This is a misunderstanding that can be costly. Developing in ABAP means working inside the confines of the ABAP stack, the R/3 architecture that is the basis for your ECC, CRM and SRM systems to name a few. Within these walls, the complete developer workflow is defined and encoded in concrete. Code completion, syntax checking, unit testing, version control and software logistics - everything is handled for you (some would say "done to you"!) on the ABAP stack.
This will continue for the ABAP-based OData services you might want to build. But in the non-proprietary world outside, the choice of tools and developer workflows can be bewildering. You have all the advantages and all the disadvantages that total freedom brings. Make sure you have all bases covered.
Just to qualify this statement somewhat - of course, in an ABAP stack environment, it's likely that there will be ABAP development for the backend part of a Fiori app. But here I'm talking about the assumption that ABAP developers can't make good frontend Fiori developers - can't transition to the skillsets and technologies required to build outside-in apps that run in browsers.
That's nonsense. For me, there's no such thing as "an ABAP developer". Yes, there are developers that write ABAP, but restricting oneself to a single language is limiting in all sorts of ways. A good developer learns new languages and techniques to stay sharp. The skills, discipline and business knowledge that a good ABAP developer has are more than translatable to the newer world of Fiori development.
Not everyone will want to transition, and a "full-stack" developer is neither better nor worse than developers separately focused on backend and frontend.
But, heck - I started my SAP career writing SAP applications in mainframe assembler, then ABAP, and now JavaScript & SAPUI5. If I can do it, what's stopping you?
Prev: FOFP 1.5 Creating functions
We've already seen our first higher-order function, map, in action. A close sibling is filter.
filter has the same pattern as map, in that it works on an Array, and applies the supplied function to each element of that Array in turn. In this case though, the function acts as a predicate, and only those elements for which the predicate evaluates to true are kept. The others are discarded, leaving you with a shorter collection.
Let's explore with a simple example based on our short list of numbers:
var nums = [1, 2, 3, 4, 5];
If we were only interested in the odd numbers, we could do this:
nums.filter(function(x) { return x % 2 !== 0; })
// [1, 3, 5]
Pretty simple. Like we did with map, we could use a pre-defined (i.e. named) function instead:
function is_odd(x) { return x % 2 !== 0; }
nums.filter(is_odd)
// [1, 3, 5]
Notice that the program is starting to become easier to read, the more we move away from the mechanical nature of the imperative style of programming towards a more declarative style.
And it doesn't end there. If we wanted to take the numbers we'd filtered our list down to (1, 3 and 5) and transform them, all we'd need to do is chain calls together ... remember that map and filter both consume and produce lists. Remembering our times function from FOFP 1.5, we could form a chain like this:
nums.filter(is_odd)
.map(times(4))
// [4, 12, 20]
Now that we've seen map and filter, it's time to have a look at their somewhat more powerful sibling, fold.
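As a quick taste before then, here's a minimal sketch using JavaScript's reduce (its name for fold) to sum our list of numbers - a preview, not part of the original post's flow:

```javascript
// A preview sketch (not from the original post): summing with reduce,
// JavaScript's name for fold. The second argument, 0, is the initial
// accumulator value; the function combines the accumulator with each element.
var nums = [1, 2, 3, 4, 5];
var sum = nums.reduce(function(acc, x) { return acc + x; }, 0);
// sum is 15
```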
I'm writing it in draft mode in public, as a sort of experiment. Feedback gratefully accepted!
Part 1 - First-class functions, list processing and higher-order functions
Part 2 - More higher order functions
Transforming with fold (aka reduce)
Part 3 - Different syntaxes and languages
JavaScript and ES6
Fat arrows and more concise function definitions
Our examples in Clojure
Part 4 - Classic list processing
(To cover lists, head+tail / first+rest and recursion)
Prev: FOFP 1.4 A different approach with map
In our previous example, we defined a helper function square and used it like this:
function square(x) {
return x * x;
}
nums.map(square)
// [1, 4, 9, 16, 25]
Let's go one step further and write something that will produce helper functions for us. We'll move away from squaring numbers, but stay on the simple theme of increasing integers.
Consider this:
function times(n) {
return function(x) {
return x * n;
}
}
What's going on? We have a function, which is returning a function. Yes, it's that higher-order nature again. Here we're defining a function that takes a multiple n. The scope defined by that function's block (the outermost {...}) closes over the value passed for n, creating a so-called "closure", with the value forged into the inner function that's returned.
The inner function also expects a value x that will be multiplied by that value of n.
Let's have a look at how we might use that:
var double = times(2);
var triple = times(3);
So take triple. What is it that we have, as a result of calling times(3)? Well, we have a function expecting an argument:
triple(6)
// 18
So really, with times, we have a function, that takes a value, and produces a function, that takes a value, that produces a value. If you're familiar with type signatures at all, for example from Haskell, you'd represent this like so:
f :: a -> (a -> a)
or simply:
f :: a -> a -> a
You could even use times like this:
times(3)(4)
// 12
In some ways, our simple times function embodies some of the essence of partial application [4.12.1.4]. Calling times with a single argument:
times(3)
is a partial application, and results in a function which is waiting for the second argument:
<function-produced-by-times(3)>(4)
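To make the idea of partial application more concrete, here's a hypothetical generic partial helper - an illustration only, not part of the post's code:

```javascript
// A hypothetical generic partial helper (illustration only).
// It presets some leading arguments, returning a function that
// waits for the rest before calling the original function.
function partial(fn) {
  var preset = Array.prototype.slice.call(arguments, 1);
  return function() {
    var rest = Array.prototype.slice.call(arguments);
    return fn.apply(null, preset.concat(rest));
  };
}

function multiply(n, x) { return n * x; }
var tripleOf = partial(multiply, 3);
tripleOf(4) // 12
```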
Now we have everything we need to use map to process our list, with a helper function:
nums.map(times(2))
// [2, 4, 6, 8, 10]
Neat!
Prev: FOFP 1.3 Some basic list processing
In our second attempt at basic list processing, we used the Array object's push function. There are other functions that operate on Array objects like our nums list. JavaScript has a set of functions that are often talked about together, and which take us into the realms of functional programming.
These functions are map, filter and reduce.
They're known as "higher-order functions", because they take functions as arguments - elevating functions to being first-class, as we discussed earlier.
Let's start with map, and see how we might improve upon our earlier attempts. The map function operates on an Array, and takes a function. It iterates over the elements of the Array, and for each of those elements, it calls the supplied function, passing the element. It builds a new Array, with the results of these calls, leaving the original Array unchanged.
Think of it as "mapping" the function over the elements of the list.
Here's an example:
var squares = nums.map(function(x) {
return x * x;
});
// [1, 4, 9, 16, 25]
That's rather neat! Much less mechanical, and no helper variables in sight. And we can re-run this as many times as we like, with no fear of nums being mutated, or data "growing" inside squares.
Let's have a look at the argument passed to map, inside this bit:
nums.map(...);
It's a function. An anonymous function, created in-line within that call:
function(x) {
return x * x;
}
This is a common pattern. You could also define a named function, and then use that named function, like this:
function square(x) {
return x * x;
}
nums.map(square)
// [1, 4, 9, 16, 25]
So we have map, a higher-order function, treating functions like our anonymous one (and its equivalent named function square) as first-class objects[^1].
You may be familiar with the Unix approach of small programs each focusing on doing one task, and being joined together in a data processing pipeline. If you are, you might see the beginnings of a similar possibility here.
Notice that map just produces a new Array, for you to look at, catch and store in a variable, or even allow to fall to the floor and disappear. So we could just as easily feed the output of that map into the input of another function that worked on Arrays - perhaps one of map's siblings filter or reduce. We'll take a look at that later.
Next: FOFP 1.5 Creating functions
[^1]: This is "objects" with no object-oriented nuances. Simply "things".
Prev: FOFP 1.2 Trying things out
Let's explore the difference between imperative and functional programming approaches with the simple processing of a list of integers 1, 2, 3, 4, 5. We want to turn them into their "squared" equivalents 1, 4, 9, 16, 25.
Create a list of integers, using the array literal syntax, like this:
var nums = [1, 2, 3, 4, 5];
A typical imperative approach to creating the squares might look like this:
var i;
for (i = 0; i < nums.length; i++) {
nums[i] = nums[i] * nums[i];
}
// 25
This pattern is very familiar. And it's very mechanical. We're giving very precise instructions on how to achieve the goal.
There's nothing wrong with that per se. It's just a little, well, mechanical. And even in this trivial example, there are a number of things that will tax us:
we are iterating through the list of integers in nums using an array index lookup. For that we need to declare and maintain a variable i, initialising it to zero at the outset (i = 0), and incrementing it by one each time around the loop (i++). So we have to keep that state in our head as we read, or (worse) want to modify that code.
we have to address the number of items in the list (nums.length) explicitly, so as to be able to finish the looping when we reach the end of the list.
inside the loop, we have to use the array index explicitly ([i]) each time we want to refer to the value of the list item currently being processed. This just adds to the cognitive noise that we have to deal with, on top of remembering that i is changing each time.
The for statement actually evaluates to something, which we see here is 25 - the last value computed inside the block. Sort of makes sense, but only a little.
So after executing this, we have what (we think) we wanted:
nums
// [1, 4, 9, 16, 25]
But perhaps the biggest problem is that if we run this a second time, we don't get the same result:
var i;
for (i = 0; i < nums.length; i++) {
nums[i] = nums[i] * nums[i];
}
// 625
625? What's going on? Well notice that we're mutating values inside the nums list. So after the first time, the values inside nums are the squares, i.e. 1, 4, 9, 16 and 25. So when we run it again, we're squaring those values, with these results:
nums
// [1, 16, 81, 256, 625]
Ouch.
Because state is being mutated, the program becomes harder to follow, harder to reason about.
So let's have another crack at this. Instead of mutating the values inside nums, we'll produce the output in another list, and keep the original list untouched. Before we start, let's put our input back to what it was:
var nums = [1, 2, 3, 4, 5];
Now we'll create a new empty array squares, and push each square value into that inside the loop:
var i;
var squares = [];
for (i = 0; i < nums.length; i++) {
squares.push(nums[i] * nums[i]);
}
// 5
Those eagle-eyed readers among you will perhaps be wondering about the value 5 here. It's not the same as what we had earlier. But it's consistent, in that it's the value of the last-executed statement inside the loop. Before, that was the result of a multiplication. Here, it's the result of a call to push, which returns the new length of the array being operated upon.
Anyway, after execution, nums is still what it was, and the output values are now to be found in squares:
squares
// [1, 4, 9, 16, 25]
That's an improvement. We have to be a bit careful if we want to re-run the code, because we need to make sure we include the initialising of the squares array before the loop, so as not to end up with this situation:
squares
// [1, 4, 9, 16, 25, 1, 4, 9, 16, 25]
But the improvement comes at a cost - yet more stuff to hold in your head, this time about the squares array.
Prev: FOFP 1.1 Introduction
To start exploring some of the fundamental concepts of functional programming, you don't need anything more than you've probably already got. Of course, there are "more" functional languages such as Haskell, Standard ML, and various dialects of Lisp, such as Scheme, Common Lisp and Clojure. But there's a language that's pretty ubiquitous and that has some very good support for core functional programming concepts.
That language is JavaScript, and it's everywhere because it's available in all the major browsers. It's likely that you have a browser on your PC or laptop, so let's see how you can get started immediately with a simple interactive environment in which we can experiment. We'll choose the Chrome browser, not because it's fast or standards compliant, or even because it's from Google, but because it has a super set of developer tools that is worth getting to know.
One of those developer tools is the console - where you can enter JavaScript and have it executed immediately. This concept of a feedback loop made out of an interactive prompt with immediate execution is commonly known as a REPL, which stands for Read, Evaluate, Print, Loop: It reads your input, evaluates it, prints the result of the evaluation and then loops around to read your next input.
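For example, a short console session might look like this (the exact prompt and output formatting will vary by browser):

```javascript
// Each expression typed at the console is read, evaluated,
// and its result printed straight back:
1 + 1
// 2
"fun".length
// 3
```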
Open up Google Chrome, and in a new tab, open up the Developer Tools using either the menu as shown, or using Ctrl-Shift-I (or Cmd-Shift-I on a Mac), or F12.
You'll see something like this:
The developer tools have opened up next to the tab you're on. Choose the "Console" entry in the menu at the top, to switch to the console, or REPL. You may see some error messages relating to the tab that's open (even a simple "new" tab), but you can ignore them[^2]. You might also want to detach the developer tools using the "Dock side" option (press the three vertical dots to get this menu) - choose the double-pane icon "undock into separate window".
At this stage, you're ready to explore.
Next: FOFP 1.3 Some basic list processing
[^2]: You can clear the console, and therefore remove the errors, with Ctrl-L (or Cmd-L on a Mac).
This document introduces some fundamental building blocks in the functional programming world.
Just so we start out on the same page, let's come up with a working definition of what functional programming is. It's a style of programming - a programming paradigm, where computation is brought about by the evaluation of functions. There's also an emphasis on immutability which means that changing state is positively discouraged. While some programming languages are imperative ("do this, do that"), functional programming can be seen as declarative, with expressions, rather than statements, being key building blocks in the programs you write.
There are languages that are designed to be entirely functional, such as Haskell, and languages that are "multi-paradigm", such as Python. Many of these multi-paradigm languages support functional programming concepts. Even languages that are traditionally and strongly object-oriented (another programming paradigm) are exploring the functional space, such as Java, with the advent of lambda expressions in Java 8.
As you might guess, functions are a key component of a language that supports functional programming. For now, let's think of a function as simply a mechanism that takes an input value and produces an output value. If we had a function that doubled a number, we'd describe it generally like this:
f : a -> b
where you would pass a value represented by a to a function f, which would produce value b as a result. In other words, when f is applied to a, it produces b. This is known as function application [4.12.1.3].
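In JavaScript, such a doubling function is a one-liner - a minimal sketch:

```javascript
// Takes a value a and produces a value b - here, b is twice a.
function double(x) {
  return x * 2;
}
double(21) // 42
```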
We'll come back to functions in a second. Let's look at other key components that make up the fundamental building blocks of a language.
In many languages you'll find integers, floating point numbers, characters and strings[^1], that we use to hold values in our programs. These simple, single types are sometimes known as scalar values - single units of data. But there are also structures that contain multiple values. Typical structures here are lists, also known as arrays or vectors, and maps, also known as hashes, or associative arrays, containing pairs of names and values.
All these types are known as being first-class, meaning that they can be used as input to, and be produced as output from, functions.
In functional programming, functions are first-class too. This means that functions can also be passed as input to other functions, and that functions can produce other functions as output [4.12.1.2].
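Here's a tiny sketch of both directions - the function names are illustrative only, not from the original text:

```javascript
// Functions as first-class values, in both directions (illustration only).
// 'twice' takes a function as input and applies it two times...
function twice(f, x) {
  return f(f(x));
}
// ...and 'makeAdder' produces a new function as its output.
function makeAdder(n) {
  return function(x) { return x + n; };
}
var add5 = makeAdder(5);
twice(add5, 10) // 20
```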
Next: Trying things out
[^1]: Some might argue that strings are not scalar, but complex structures. But that's for another time.
I solved 4Clojure puzzle 100 (Least Common Multiple) with this code:
(fn [& args]
(let [[x & xs] (reverse (sort args))
are-divisors? (fn [n] (zero? (reduce #(+ %1 (mod n %2)) 0 xs)))]
(->> (iterate (partial + x) x)
(filter are-divisors?)
first)))
I'm not a mathematician, so forgive me, but my approach to the solution was to take the largest of the numbers supplied, and build a lazy sequence of its multiples, (e.g. starting with 7 it would be 7, 14, 21, 28 etc). The first number in that sequence that had the rest of the numbers as factors was the answer.
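For comparison, the same approach might be sketched in JavaScript - a hypothetical helper for positive integers only, not from the post:

```javascript
// A JavaScript sketch of the same idea (hypothetical, positive integers only):
// take the largest input, step through its multiples, and return the
// first multiple that has all the other inputs as factors.
function lcm() {
  var args = Array.prototype.slice.call(arguments).sort(function(a, b) {
    return b - a;
  });
  var x = args[0], xs = args.slice(1);
  for (var candidate = x; ; candidate += x) {
    if (xs.every(function(n) { return candidate % n === 0; })) {
      return candidate;
    }
  }
}
lcm(3, 4, 6) // 12
```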
Expressing that in Clojure, I first marshalled the input and prepared a function that I could use in the main part of the resolution. In the let binding, I split the input numbers into a single scalar - the greatest of them - and a sequence of the rest of them. Then I defined a function on the fly to close over that "rest" sequence (represented by the xs var) and determined whether those numbers were divisors of a given number n.
Looking in detail at this function, here's what I was expressing:
fold over the rest of the numbers with (reduce ... 0 xs)
for each of those numbers, take the modulo of n divided by that number
with reduce, add up the total of the modulo results: #(+ %1 (mod n %2))
check that the total is zero: (zero? ...)
Neat enough, I thought.
But the nature of this is slightly mechanical ... I wanted to know whether every number was a divisor, and did that with maths (deriving a modulo total and checking for zero). So while I was doing well, I didn't entirely Say What I Mean (SWIM).
Looking at someone else's solution, I discovered the predicate function every? that was perfect, and would allow me to SWIM better.
Here's my definition:
(fn [n] (zero? (reduce #(+ %1 (mod n %2)) 0 xs)))
and here's a version using every?:
(fn [n] (every? #(zero? (mod n %)) xs))
Yes, it's shorter, which is nice, but the difference is striking. With this version, I'm now saying: "is every modulo of n, and the numbers under test, zero?"
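JavaScript has the same idea built in as Array.prototype.every - here's a comparative sketch (the function and variable names are my own, illustrative only):

```javascript
// The same predicate, using JavaScript's Array.prototype.every
// (a comparison sketch; names are illustrative):
function isDivisibleByAll(n, xs) {
  return xs.every(function(x) { return n % x === 0; });
}
isDivisibleByAll(12, [2, 3, 4]) // true
isDivisibleByAll(12, [5]) // false
```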
One small step closer to a more natural ability to Say What I Mean.
Pretty much at random, I picked the wonderland-number puzzle where you have to find a number with particular properties. In a way, the puzzle is similar to the ones you can find on Project Euler.
The problem statement is simple. It's about finding a Cyclic number, thus:
As I'm tired, it was quite nice to be able to apply the philosophy of building up from small blocks to reach the solution. So, here goes.
Step 1 - Getting the digits of a number
We're going to be comparing digits of a number, so let's have a function that will return a sequence of digits for a given number:
(defn digits [n] (map #(- (int %) (int \0)) (str n)))
The str function calls .toString on its argument, here turning a number into a string, and therefore, more importantly, a sequence that we can map over.
The anonymous function we're using in the map simply converts the char value of each of the string characters to their numeric equivalents. (I do find converting a string representing a digit to its numeric value equivalent a little clunky in Clojure, having a background in scripting languages that make that more seamless. Perhaps I'm missing something. But I digress.)
Let's try it out:
scratchpad.core=> (digits 12401)
(1 2 4 0 1)
Step 2 - A unique set of digits
We actually want a unique set of digits, so we can better compare them:
(def digit-set (comp set digits))
Simply composing the function set with our new digits function does the trick.
Let's try it out:
scratchpad.core=> (digit-set 12401)
#{0 1 2 4}
Step 3 - Multiple results
So now we want to generate the list of results of multiplying the number under test with 2, 3, 4, 5 and 6. We want those results as digit sets. Here goes:
(defn mult-result [n] (map #(digit-set (* n %)) (range 2 7)))
All we're doing is mapping (with map) an anonymous function over the range of "multiplier" numbers 2 through 6 inclusive. This anonymous function multiplies the number under test by the particular multiplier being mapped over, and produces a digit set from the result.
Let's try it out:
scratchpad.core=> (mult-result 123456)
(#{1 2 4 6 9} #{0 3 6 7 8} #{2 3 4 8 9} #{0 1 2 6 7 8} #{0 3 4 6 7})
Step 4 - Checking the digits are the same
The last thing we have to do is check whether the digits are the same in each of the multiplier cases.
(defn same-digits? [n] (apply = (mult-result n)))
Using apply with a function allows that function to be used with the contents of the sequence supplied, rather than with the sequence itself. So the = function operates on the multiple arguments that are the elements of the sequence produced by (mult-result n). The function name ends with a question mark, in the tradition for Clojure predicate functions that return true or false.
Let's try it out:
scratchpad.core=> (same-digits? 123456)
false
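Before moving on, here's a tiny illustration (my own addition, not from the original post) of what apply is doing for us here:

```clojure
;; Without apply, = would compare the whole vector to nothing and
;; return true trivially; apply unpacks the sequence so that = compares
;; the digit sets themselves, as if we had written
;; (= #{1 2} #{1 2} #{1 2}).
(apply = [#{1 2} #{1 2} #{1 2}])
;; => true
```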
Step 5 - Profit
Now we have all we need, and can use the same-digits?
function as a predicate in calling filter
on the six digit numbers:
scratchpad.core=> (first (filter same-digits? (range 100000 1000000)))
142857
Result!
So there are undoubtedly better ways of approaching this puzzle, but I wanted to illustrate the bottom-up approach to computing that Clojure, and functional programming in general, lends itself rather well to. And on the occasions when you're tired and can only think in small chunks, it's ideal :-)
It is better to have 100 functions operate on one data structure than 10 functions on 10 data structures
This makes a lot of sense. But it also is clear that the language, as a set of functions and features, is large. Of course, at a low level, the language is very small; but the layers that have been built to operate on data structures have a depth that I haven't yet mastered.
It's not a case of the layers or functions being too complicated ... rather, I just haven't discovered everything that's possible yet. And when I haven't, I am resorting to mechanical solutions. I suppose this is simply a part of the journey, and while building a mechanical solution to a problem is irksome, it's educational, especially when you are shown something so much more succinct.
An example
Here's one example, a solution to 4Clojure problem 63 "Group a Sequence". A fairly straightforward challenge, but one that I couldn't see an obviously neat way of solving. (Note that the rules prevented the use of the group-by
function, with which it would have been a cinch to solve, of course!).
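For comparison (my addition), the forbidden group-by function would indeed have made this a one-liner. Using the data from the puzzle's first unit test:

```clojure
;; Group the numbers by whether they are greater than 5 - exactly
;; what the puzzle asks for, done by the built-in group-by.
(group-by #(> % 5) [1 3 6 8])
;; => {false [1 3], true [6 8]}
```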
A clean but mechanical approach
Here's what I ended up with:
(fn p63 [f xs]
(loop [elements xs
result {}]
(if (empty? elements)
result
(let [element (first elements)
value (f element)
values (or (result value) [])]
(recur (rest elements)
(assoc result value (conj values element)))))))
In one way, I'm happy, because it's using the loop/recur
construction (tail recursion idiom), with the "first/rest" pattern, and it's not mutating any state. And I typed this in directly and it solved the puzzle first time :-)
But there's a mechanical nature to it. Here's what it does, generally: it loops with the elements given and an empty result map; in a let binding it works out the current element, its calculated value, and the values collected so far for that value; then it simply recurs with the rest of the elements (all but the first), setting the value for the result var to be that map plus the addition of the calculated value in the right place.
A neater approach
Here's the solution from another 4Clojure user that I'm following (and I am learning a great deal from them, whoever they are!):
(fn [f s]
(apply merge-with concat (map #(hash-map (f %1) [%1]) s)))
Wow. The power of this solution, and the secret of its brevity, is the merge-with
function, which is documented thus:
Returns a map that consists of the rest of the maps conj-ed onto the first. If a key occurs in more than one map, the mapping(s) from the latter (left-to-right) will be combined with the mapping in the result by calling (f val-in-result val-in-latter).
This was exactly the right thing. The (map #(hash-map (f %1) [%1]) s)
form simply returned a flat list of hash-maps with the keys being the result of applying the given function to the element, and the values being the elements themselves. Beautifully simple, in the philosophy of focusing on performing just one task.
And then the myriad hash-maps were gathered together with merge-with
using the concat
function to resolve same-key clashes (in other words, "just group them together").
Taking the first of the puzzle's unit tests as an example, here's what stage one (pre merge-with
) looks like. This:
((fn [f s] (map #(hash-map (f %1) [%1]) s)) #(> % 5) [1 3 6 8])
produces this:
({false [1]} {false [3]} {true [6]} {true [8]})
Then applying the merge-with concat
we get the result:
{false (1 3) true (6 8)}
Lovely. I'm still on my journey to enlightenment, and am enjoying learning about functions such as merge-with
on the way.
We're into week 7, the final week of this openSAP course "Build Your Own SAP Fiori App in the Cloud - 2016 Edition". As we're in the midst of building our app for the Develop Challenge, this final week is deliberately short, with only two units.
Unit 1 "End-to-End Development Scenario". If you've seen a demo of the SAP Web IDE before, in particular for generating and subsequently editing an app from a template, you'll already be familiar with a lot of the content of this unit. I'm all for repeating information and demos for learning and for strengthening the neurons, but I honestly didn't find anything significantly new here. I think perhaps the intention is to show a final end-to-end scenario, where each course participant should now be comfortable with the details and nuances of each part along the way.
There was one part which touched on some of the features of the git functionality in the Web IDE, along with a brief view of how that is then exposed in the SAP HANA Cloud Platform, but I'd like to have seen this in the context of a non-master branch.
Unit 2 "End-to-End Administration Scenario". In many ways, this is the other side of the coin to the development of UI5-powered apps for Fiori scenarios. While Unit 1 covers this development, this unit briefly covers what's possible in the context of the setting for these apps - the SAP Fiori Launchpad. Specifically, this is for the cloud-based Launchpad, as provided by the SAP HANA Cloud Portal services of the HANA Cloud Platform.
It's a shame that the content of this unit is out of date, at least visually.
The SAP Fiori configuration cockpit changed a while ago, and looks nothing like what's shown in this unit. There was a brief disclaimer message during the video, but that doesn't really help that much. That said, the actual functionality has not changed much, and with the availability of the cloud-based Launchpad in the HCP trial accounts, it's quite easy for you to explore it yourself.
In fact, because there are some complex relationships possible between the building blocks such as tiles, groups, catalogues and roles, it's better anyway to have a play around and try to get something working that makes sense to you. This is a great example of where theory is not everything - getting the mechanisms under your fingernails and the ideas embedded into your understanding is key here.
One feature that was highlighted was the "dynamic" tile type. This is close to my heart, especially in the light of our upcoming lunchtime webinar on 26 Apr:
The SAP Fiori Launchpad as a Human-Centric Dashboard
where we explore the possibilities that are presented to us by the SAP Fiori Launchpad and its features such as the different tile types.
Finally, there was a nice touch after the instructor added the details for the dynamic tile - specifically the Number Unit value "Notebooks":
"Wait!" I hear you say. "That's static text - what about consuming this in different languages?"
And the openSAP course folks must have pre-empted that thought. Directly following this was a short section on the Translations service within the configuration cockpit. In a similar way to how you handle internationalisation (i18n) resources for a Fiori app using the UI5 detection and resourcing mechanisms, you can also manage property files of name/value pairs for text elements.
Here's a screenshot of what this looks like, slightly beyond where this unit's video ended:
Here, I've added and activated a German locale for my Launchpad, which means I'd see the translated texts when consuming the Launchpad site in German, say, by adding the sap-language=de parameter to the URL.
Anyway, as I mentioned earlier - the best way for you to learn more is to get going on your trial account and play around. Have fun!
One of the nice things about 4Clojure is that you can "follow" other users, the practical upshot of which is that when you provide a correct solution for a given puzzle, you can look at the solutions from your followers too. I'm following five users, and often their solutions are delightfully different to mine - sometimes simpler, sometimes more elegant, sometimes using an approach I'd never thought of, and sometimes all of the above.
Learning by Doing
I gave a talk last month at the Manchester Lambda Lounge. It was titled "Learning by Doing - Beginning Clojure by Solving Puzzles". I talked through my approaches to solving a few puzzles, sharing my thought processes with the other members of the group. It was fun, and educational - certainly for me!
The theme running through the talk turned out to be "everything is a list". There's a lot to say on this, but I'll limit it here to suggest that in building solutions, it's possible to think in terms of lists, of sequences, and functions that operate thereon. Intertwined with this was my attempt to not mutate any state, and not to approach problems mechanically ... avoiding the how, and focusing on the what.
4Clojure 66
So here's my approach to solving the 4Clojure problem number 66 "Greatest Common Divisor". Please bear in mind it's not the most efficient or elegant. I just wanted to share my thinking. It's the sort of thing I'd like to read if I was exploring a new language, to see different possible ways of thinking computationally.
You can read the puzzle statement over on the 4Clojure site. One of the test cases looks like this:
(= (__ 1023 858) 33)
We'll use this as a basis for our direction. In this test case, as in all of them, we need to define a function that will sit where the __
placeholder is, so that the whole expression, or form, is true. So we need a function that takes two arguments (1023 and 858) and returns 33 as the greatest common divisor.
Where we'll get to
Here's the complete solution which we'll be working our way towards:
(fn [& args]
(letfn [(common-div [i] (zero? (reduce + (map #(mod % i) args))))]
(->> (range (apply min args) 0 -1)
(filter common-div)
first)))
A helper function
Breaking the problem down, it would be good to have a function that told me whether a given number was a divisor of some other numbers. So in a letfn
binding I defined a function common-div
which did exactly that. The function was defined to close over the args
to the main (outer) function, i.e. in this particular test case, 1023 and 858.
This common-div
function works out whether the number supplied, i
, divides evenly into the numbers in args
. It does this by mapping an anonymous function #(mod % i)
over the args
. This anonymous function returns the modulo, or remainder, of dividing the number(s) by i
. If the numbers are all evenly divisible, then this should produce a list of zeros, like this:
scratchpad.core=> (def args [1023 858])
#'scratchpad.core/args
scratchpad.core=> (def i 3)
#'scratchpad.core/i
scratchpad.core=> (map #(mod % i) args)
(0 0)
And folding over this list of remainders, using reduce
, with the addition function, should produce zero, if i
is a common divisor:
scratchpad.core=> (reduce + (map #(mod % i) args))
0
Finding the answer
Now we have such a helper function, we can rattle through the puzzle. Inside our letfn
binding we start with a threading macro (->>
) which simply allows us to write a sequence of functions in a way that's arguably more readable. What we want to 'thread' is a list of numbers, ranging from the lower of the two args
down to 1 inclusive. So in this case we want a range from 858 to 1.
scratchpad.core=> (range (apply min args) 0 -1)
(858 857 856 ... 1)
The apply
here is doing a similar thing to what it does in JavaScript. If we called min
with the args
directly, we'd get this:
scratchpad.core=> (min args)
[1023 858]
because min treats args
as a single entity (the list of two numbers ... actually a vector in this case). The function apply
applies the given function to the content of the list, breaking them out of that list:
scratchpad.core=> (apply min args)
858
This list produced by range
is passed into the filter
function which is using our common-div
function we defined earlier, which should result in a much shorter list of those numbers that divide evenly into the two args
, i.e. (33 11 3 1)
.
And because we're working backwards down to 1, the first number in this filtered list we come across is the one we want: 33. Bingo!
Final thoughts
As I mentioned at the outset, this is not necessarily the most efficient solution. But it shows that you can think in terms of lists, with logic that doesn't require any mutation of state. It becomes simply an expression that evaluates to an answer. It's a different way of thinking, but I like it very much.
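As an aside (my own addition, not part of the original post): the more efficient approach alluded to above is the classic Euclidean algorithm, which also works without any mutation of state:

```clojure
;; The Euclidean algorithm, sketched for contrast: repeatedly replace
;; (a, b) with (b, a mod b) until b is zero, at which point a is the
;; greatest common divisor.
(defn gcd [a b]
  (if (zero? b)
    a
    (recur b (mod a b))))
;; (gcd 1023 858) => 33
```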
Well, we're on to the penultimate week of learnings, in the midst of the Design Challenge Peer Review due in at the end of this week, and at the start of the Develop Challenge. Phew! Let's take a look at the units this week.
Unit 1 "Introduction to SAP Fiori Extensibility with SAPUI5". This was quite a good overview of the different extension capabilities with SAP Fiori. It's an introduction, so I didn't expect a deep dive, but in fact the presentation of the extension concept, within the time and slide content constraints, was a good one. It explained the way that an extension project starts out lean, a "delta" with the parent (or "original") application, and then builds as extensions are added. The slideware was good too - a clear and meaningful build-up of the Component.js file.
While the extension concept of the SAPUI5 toolkit supports the core extension capabilities at the developer level, it was also interesting to see Runtime Adaptation classed, along with general user-level customising possibilities, under the general extension umbrella. And rightly so. Runtime Adaptation is quite an achievement; while still a relatively young concept and section of the SAPUI5 toolkit, it's definitely worth having a look. You could almost see it as "Personas for Fiori". Now how does that mess with your current pigeon-holing of tactical and strategic UX technologies? :-)
There were a couple of things that I wanted to draw your attention to with the extension concept, lest misunderstanding were to take hold:
Caveat developer!
Unit 2 "Extensibility with SAP Web IDE - SAP Fiori Cloud Example". This unit is definitely worthwhile, and a very quick walkthrough of what you can do yourself too. Last year at SAP TechEd EMEA, I was lucky enough to co-present a number of SAP Fiori related workshops (see "Speaking at SAP TechEd EMEA 2015"). One of them was to have the participants walk through extending an app in the Fiori Cloud Edition, just like the instructor did in this unit. You can do it yourself too now - just visit the SAP Fiori Demo Cloud Edition home and follow your nose. There's also an exercise following this unit which will take you through something similar.
With extensions, we have many questions to answer. Why, how, what and where, for starters. The "why" is simple - because you want to have the app meet your specific business needs and nuances. The "how" is what this unit covers at a high level: the extension concept in general, along with the great support in the SAP Web IDE.
But it's the "what" and "where" that will most likely cause the new SAP Fiori developer to scratch their head. What can I extend, and where do I find it? Well, this is partially answered in the details section of each SAP Fiori app in the Fiori Apps Library. But knowing how an app is generally structured, and knowing specifically how your particular app is structured, is a level of detail and understanding much deeper than that.
And what, I hear you say, is the meaning of the S2, S3 and S4 view names? These are artifacts of the original SAP Fiori development approach within SAP, with S2 being the master view of a Master-Detail application, S3 being the detail, and so on. Getting inside the mind and the development context of the developer(s) that wrote the code you're trying to modify (whether that's SAP Fiori or something completely different) is something you should try to do, as it will make a big difference.
Oh yes, one more thing - the instructor added an event for the camera button he added to the S4 view. The event he chose from the list was "tap", which was right next to "press" in the list. Unfortunately the Button control's "tap" event has been deprecated since 1.20 (in favour of the "press" event), but as (a) the SAP Web IDE didn't highlight this (yet!) and (b) the clock on the instructor's screen showed just after 5 o'clock in the morning, we can overlook this ;-)
Unit 3 "Introduction to Enabling SAP Fiori for Mobile". There's a ton of stuff that SAP (and Sybase) have developed in the area of mobile app creation, deployment and management. I'm sure I'm not the only one somewhat dazzled by the oncoming headlights of so much traffic in this area. So if for no other reason than to summarise where SAP stands today with respect to their direction and strategy in this area, this unit serves us well.
In a conversation last month, I was rather surprised to hear that there were some people who had not heard of the term "hybrid" in the context of mobile apps. This unit clears that up for folks, and explains what hybrid means. This word for me will forever have one of its original meanings from its Greek roots (ὕβρις) - where it was used to describe the mythical Chimaera, made up of three different species of animal - and was an insult (think "hubris") to each one. I wonder if SAP considered this in the Hybris context?
Anyway, the unit actually covers a lot of ground, at a high level, contextualising SAP Mobile Services (SAPms), Apache Cordova (née PhoneGap), Kapsel, the SAP Fiori Client and much more besides. This is the sort of unit where you'll want to review the slides again later to make sure you've built the right set of pigeonholes in your brain to store the flood of information that you know is going to come your way.
One concept that was mentioned but never really expanded upon was the offline OData feature. That alone could perhaps have taken up a whole unit, a whole week, or (in depth) a whole course, but it would have been good for the participants to learn, even at a high level, what was possible.
Unit 4 "Extending an SAP Fiori App for Mobile - Use Case". Enough theory, how about some practical demonstration? This unit delivers that. It continues the app from Unit 2 earlier in the week, and adds code to the event handler created then. It's sometimes difficult to keep up with dry material, so seeing something in action is a nice diversion. It also shows that not everything is always perfect - even the happy path that the demonstration was following was marred slightly by some network issues (it looked like the mobile device momentarily dropped off the wifi network).
It would have been nice to dig a little deeper into the background behind the Hybrid Application Toolkit (HAT) settings, especially the connection between the workstation-local resources and the configuration in these settings (see screenshot).
I'm thinking, however, that this is covered in the companion course "Developing Mobile Apps with SAP HANA Cloud Platform".
Unit 5 "Cloud Extensions with SAP SuccessFactors". If ever there was a unit full to bursting with a demo that makes you want to know more about absolutely everything, this unit comes close. Even at the longer length of 25 mins, this unit's video only managed to scratch the surface of extending SAP SuccessFactors with an embedded UI5 app.
Looking past the clear signs of multiple SAP teams working on different parts of SAP's cloud strategy as a whole, I can't help but marvel at where they've got to. Yes, a pedant like me can spot some inconsistencies and things that don't look right to the eye. There are also many questions relating to how things really connect and are authenticated, but on the whole, it was a good illustration of some of the core building blocks of the PaaS that HCP is.
I'm really glad that the example the instructor chose was consuming an OData service, via the Destinations facility of HCP. The backend exposing this OData service was marked as "odata_gen" - just like you'd mark any non-SAP (AS ABAP stack) OData service like Northwind.
I did wonder somewhat about the use of the URL in the Widget specification - it was supposed to be the URL of the app on HCP, but instead was another one with "demo2" in the first part. Ah well, let's put that down to slightly disjointed end-to-end demos.
Of course, like before, this packed unit was really just a taster for a full-blown openSAP course. This time it's "Extending SAP Products with SAP HANA Cloud Platform".
Finally, did you notice the use of OpenSocial? Here's another example of SAP adopting open protocols and standards, like HTTP and OData. OpenSocial defines a component hosting environment and a set of APIs, and SAP uses it to enable the mashup of different components (such as extensions) to run within the same document object model (web page). This continues to give me confidence and encourages me to invest time and effort in learning about the technologies and uses thereof in this space, as I know I'm less likely to be going down a path that leads nowhere.
Well, that's the last main unit of this week. There was, as usual, the video blog update for this week, which I enjoyed. What struck me was the necessity to lead the course participants through architectural structures such as the one described in this diagram:
The cloud, with SaaS, PaaS and more, does mean that the on-premise landscape is simplified, but it doesn't mean that the architectures in general are simplified. Far from it. With tenants, accounts, trial and production platforms in the cloud, and connections to on-premise systems and even other cloud systems, it's only going to get more complex.
And the final word goes to Bob, in the video update: "make sure you continue using Google Chrome". What a refreshing change from SAP's requirement in the bad old days, when it explicitly required you to use that disaster of a "web browser", Internet Explorer. Onwards and upwards!
So week 5 came and went rather quickly - perhaps it was the Bank Holiday weekend! This week sees the introduction of the Develop Challenge, as well as the main content of this week - looking at more advanced features of the SAP Web IDE. Let's review the units.
Unit 1 "Enhancing Your SAP Fiori App with the Layout Editor". This was a good introduction to the Layout Editor - an alternative way to edit Fiori app views. The Layout Editor is an accomplished piece of software, especially when you consider it's running in the browser. I wouldn't say it is my first choice for editing view definitions, but for the occasional user, it might be just the right thing. (There's an argument that says occasional users shouldn't be messing around with the innards of Fiori apps, but I'll leave that for another day.)
It's worth remembering one of the reasons why such a thing as the Layout Editor (and indeed the view part of the extension concept) is possible: views in UI5 can be defined imperatively, in JavaScript, or they can be defined declaratively in JSON, HTML or XML. All three of these declarative formats are much easier to parse and manipulate programmatically - which is what the Layout Editor is doing.
The format of choice for Fiori apps is XML - I wrote an introduction to XML views, contrasting them with their (then more popular) JavaScript equivalents, in a post as part of a series (Mobile Dev Course W3U3 Rewrite) on the SAP Community Network back in 2013: Mobile Dev Course W3U3 Rewrite - XML Views - An Intro. If you're still trying to decide which format to use, XML should be where you start - simple as that.
Anyway, take a look at this unit to find out more about the Layout Editor. Moreover, if you haven't used it yourself, there's a related exercise which goes into great detail - including data binding. The 34-page exercise document is called "Enhance Your SAP Fiori App with the Layout Editor" and it sits between this unit and the next.
Unit 2 "Develop Challenge: Build Your Own App with Peer Review". Can you say "meta-course"? This unit merely covered the details of the Develop Challenge. That said, "merely" doesn't really do justice to this unit, or the information imparted.
There's a lot packed into this course, including two major hands-on activities - the Design Challenge and the Develop Challenge, with their respective peer review activities to boot. I'm struggling a little to keep up with what's required, especially as I'm dipping in and out of the course material when I have time.
So while this unit helps, I'm still a bit confused - particularly about when the peer review for the Design Challenge is to start. That confusion was increased by the deadline extension given for this challenge (which I mentioned last week). But perhaps I'm just getting old. I checked my Study submitted via Build / Splash, but have had no peer feedback yet. Nor have I had any prompting to start the peer review (we all get 5 submissions to review).
Update - I did some digging around, and you can now get to the peer review section within the Design Challenge section, as shown here:
So don't wait for any prompts or emails - just go there and start!
Unit 3 "Other Considerations in Building an SAP Fiori App". Well, this was certainly a challenging unit! Challenging for the audience (it was a sudden slip off the relaxing poolside into the deep water), for the presenter (squeezing that much content into a 15 minute video was clearly a struggle), and challenging for the course as a whole, because while extremely important, it didn't really fit into the flow of where this course has come from.
As a small example, one of the slides showed a snippet of a key artifact in any Fiori app - the Component.js file (unfortunately written on the slide as Components.js). To understand the context, the audience would have to have some non-trivial knowledge of UI5 development. This is coming (in a course starting in April, hurray!) but we're not there yet.
It was like a whirlwind tour of lots of small performance and security related topics, which, if done properly, could be expanded into a 3 or 4 day course :-) So in that sense, it was useful to impart. But I wonder how much of that information was really understood?
This unit talked about OData Choreography - a phrase I like - but it also had a couple of questionable pieces of advice, at least in my view. If you want only three properties of an entity rather than the three hundred it normally sports, you were advised to create another OData service. That's not what I'd do - rather, I'd use the power of the OData protocol and use the query string option "$select" to return just the properties I needed.
I also baulk somewhat at the recommendation to use $batch. In a TLS (i.e. HTTPS) context, URI security is fine, and while increasing performance (by batching up multiple requests), batching makes the application mechanics more opaque and difficult to support and debug. The approach also flies in the face of the architectural style that has informed the OData protocol as a whole - REpresentational State Transfer (REST). Don't mask resource identifiers (URIs) and hide them where they don't belong!
Finally, it would have been nice to hear a more compelling explanation of the reasoning behind the "one app / one service" rule. But perhaps that's coming later.
All in all, this unit was a useful, if slightly incongruous, poke in the ribs for the attendees to let them know it's not just point-and-click in the Web IDE Layout Editor. I guess my comment about occasional users earlier in this post is relevant here too :-)
Unit 4 "Creating an SAP Fiori App with a Smart Template". The combination of OData, annotations, and Smart Templates is a powerful one. We had a brief introduction to Smart Templates in week 3, so I need not dwell on them too much again here.
Suffice it to say that this unit showed a pretty impressive demo - a happy path demo, but a good one nevertheless. One of the things that stood out from the slide notes, and was brought up by the presenter, was that with Smart Templates, there are "NO modifications". I've yet to find out what that really implies; read-only code sometimes goes hand in hand with auto-generation, but I'm pretty certain that what we're going to end up with is something workable. The UI5 toolkit itself does a lot of the heavy lifting here, so it's not as if there's a ton of code being emitted.
It's definitely an extremely interesting area, and one to watch and learn more about.
Unit 5 "Introduction to SAP Fiori Overview Page". Pretty much a continuation of the previous unit; here we move on to another concrete artifact from the "smart" stable - the Overview Page, or OVP.
You can't help but be impressed by and attracted to this lovely combination of practical and visually appealing functionality. It's a cross between the Fiori Launchpad and the functions and features of Fiori apps, all on a single page.
But perhaps what's most impressive is the way that the OVP plugin works and embeds itself seamlessly into the Web IDE. The generation of the core OVP example was impressive, but what really took my fancy was the addition, in the demo, of actual cards, via further wizards. When you consider the different teams that have been involved, it's a great example of the tip of a complex iceberg, both technically and organisationally at SAP. Nice work, teams!
By the way - if this sort of UI presentation appeals to you, you may be interested in a free 1-hour webinar I'm involved in on 26 Apr. For more information, see the event's homepage:
The SAP Fiori Launchpad as a Human-Centric Dashboard
and maybe I'll see you there!
Unit 6 "Deploying Your App". This was the last main unit of this week's course content, and covered, at a high level, what you do next after building your Fiori app. Basically there are a couple of main options - a deployment to a Fiori Launchpad site on the HCP cloud portal, or a deployment to an ABAP stack backend SAP system. There was another option shown in the slide and in the demo, to "clone to a git repository". I can only think there was a little bit of confusion here - git clone goes the other way, creating (pulling) a copy of an existing repository, not instantiating (pushing) a new repository.
But we'll gloss over that for now, especially as the key git parts of the workflow - when deploying to HCP - were actually shown. There are some gotchas with the git workflow when it comes to using different branches (which you should be doing during development). You may be interested in this screencast I recorded showing you the challenge, and the solution: "SAP HCP, git, and Feature Branches".
It was good to see the relationship between the app itself, the portal FLP site, and HCP, along with the roles, even though it was a very quick high level overview. Almost directly after the video was recorded (mid Dec 2015), some improvements were made in this area; notably, with SAP Web IDE release 1.19, the ability to specify which FLP site you want to deploy the app to. You can have multiple FLP sites in your HCP portal account, and the ability to specify the right target site is key. See the "Registering Applications to SAP Fiori Launchpad" documentation for more details.
One thing that struck me as a little odd was the tile definition demo. After the Tile Configuration part of the wizard was complete, we looked at the tile in the FLP, but it had different text (title and subtitle) to what had been defined. I can only think that this might have been the product of a video splice ... and it serves as a reminder of how hard it is to create even one tutorial video, let alone multiple series! (I speak from first hand experience here.) So my hat goes off to the teams that turn out such great content for all these openSAP courses.
For some of the audience, it might have been worth just spending a bit of time explaining the BSP Application connection, when the deployment to an ABAP repository was shown. Of course, there isn't a connection - you're not travelling back in time to the Business Server Pages technology arena; rather, it's just that the storage container for a BSP Application was deemed "good enough" to contain all the artifacts of a Fiori app - it being a web app, after all.
On the subject of deployment, this time to the FLP, it would also have been nice to mention the difference between the "webapp" folder and the "dist" folder created in such circumstances. But perhaps that will come up in another course.
Despite these gaps, it was a good overview unit that completed the picture.
Post script: one of the self-test questions for this unit rather unfortunately suffered from perhaps being set by someone other than the person who wrote or presented the unit video content; there isn't a single "correct process for deploying an SAP Fiori app to the cloud Launchpad", and the officially correct answer doesn't reflect the procedure that was shown in the video. Ah well, it's only a self-test :-)
That's about it for this week - see you next time!
I started running a couple of years ago, and as well as being pushed on by the promise of adrenaline and endorphins, I'm also driven by the stats. How far have I run this week, and this year to date? How does that compare to this time last year? What are my highlights, my averages, and total distances?
Originally opting for a Garmin Forerunner 110 running watch (with companion heart rate monitor strap), I now use a TomTom Runner Cardio. Both watches deliver similar functionality, which includes GPS-tracking (and therefore also distance, pace & elevation) and heart rate monitoring. Both watches therefore spit out a ton of data, which I have automatically uploaded to Endomondo, a sports tracking website, but I also maintain a Google spreadsheet with the stats. This is partly to remain somewhat independent of any particular sports tracking site, but also because the Google Apps Script platform allows me to build functions to make that data useful.
For example, I can expose spreadsheet data as JSON via the SheetAsJSON mechanism I created and wrote about back in 2013. Useful on its own, but when combined with the power of the SAP Web IDE templating, for example, even better - see the video SAP Fiori Rapid Prototyping: SAP Web IDE and Google Docs below for more information on this.
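The core of the SheetAsJSON idea can be sketched in a few lines (a simplified illustration only, not the actual published code): treat the first row of the sheet as keys, and each subsequent row as a record.

```javascript
// Simplified illustration of the SheetAsJSON idea (not the actual code):
// convert a sheet's 2D cell array, with headers in row 1, into JSON records.
function sheetToJson(values) {
  var headers = values[0];
  return values.slice(1).map(function (row) {
    var record = {};
    headers.forEach(function (header, i) {
      record[header] = row[i];
    });
    return record;
  });
}

// Example with some running stats, as they might appear in the spreadsheet:
var rows = [
  ["date", "km", "avgHeartRate"],
  ["2016-04-01", 10.2, 148],
  ["2016-04-03", 5.0, 152]
];
console.log(JSON.stringify(sheetToJson(rows)));
```

In Google Apps Script this sort of function would sit inside a doGet handler, reading the cells via SpreadsheetApp and returning the result as JSON-typed text output via ContentService.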
I'm a big fan of the new User Experience (UX) that SAP is bringing to the world, in the shape of UI5 powered layers. The most obvious is of course the layer of Fiori goodness, but there's also the Fiori Launchpad (FLP) from where Fiori apps are served. I think we're just scratching the surface of this new lightweight portal. The tile concept, with the related tile groups, intents, catalog and role mechanisms is not only well thought out but also flexible enough for many situations - not just merely exposing apps to users.
I decided to combine the KPIs that I'm collecting from my running, and expose them in an FLP. It's early days, but already the visual at-a-glance layout is appealing, and with the responsive nature of the FLP (thanks to UI5) I can have it in my pocket too.
The figures you see in this screenshot are taken live and direct from the Google spreadsheet I mentioned earlier. The pointers to the values are set in the configuration of dynamic tiles in the FLP site I've created in my trial version of the SAP HANA Cloud Portal. So after I've finished a run and maintained the line of data in my spreadsheet, the aggregated information finds its way here automatically.
Now, I just happen to use a Google spreadsheet because I personally value the power and simplicity that the Google Apps platform and infrastructure has to offer. But if you have your data somewhere else, that's no problem. Of course, in a business context this is most likely going to be inside one or more of your SAP systems. And that's the simplest case. But with the flexible nature of the FLP, built on open standards such as HTTP, it's really only your imagination that is the limit.
If this has whetted your appetite for more information on using the SAP Fiori Launchpad as a dashboard for personal or work related information, and you want to find out more about dynamic tiles and what else is possible, you'll want to attend our webinar next month:
The SAP Fiori Launchpad as a Human-Centric Dashboard
It's an hour at lunchtime (GMT) on 26 April, free to attend, and may just be what you're looking for. See you there!
At SAP Inside Track Sheffield last year, one of the sessions I gave was "Fixing up a nicer HCP Destinations table", where I showed the power of UI5 introspection and the Chrome Developer Tools that enabled us to modify the surface upon which we were standing, to improve things. I re-recorded my session as a video here, in case you're interested:
Fixing the HCP cockpit titles
Anyway, there's something else that's been niggling me a bit while using the HANA Cloud Platform (HCP) cockpit. And that's the inability to see which tabs in my Chrome browser are open at which particular areas of the cockpit. Due to the way each location's title text is structured, all the tabs look the same - at least at the start. It's only when you hover over them that you see what a given tab contains.
Here's an example:
It's only when hovering over the first tab that I see that it's showing the HTML5 Applications part of the cockpit. If I'm looking to switch to that tab, the search for the correct one is painful.
So I wanted to take a quick look to see where this title was being set, and when. I used the Chrome Developer Tools' DOM breakpoints feature to halt when the title element was changed:
This led me to a section of the HCP cockpit code inside the Navigation Manager (cockpit.core.navigation.NavigationManager), in a function called "navigate". This is what the code that sets the title looks like (I took the liberty of formatting it a little better for readability):
You can see how the title string is constructed - with the most significant part (current.navigationEntry.getTitle()) buried deep within it.
A small change to this code, so it looks like this:
brings the most significant part to the front, meaning that now I can see which tab contains which HCP cockpit feature - at a glance:
I think that's a nice improvement. Personally, I'd love to see this make it into a future release. What do you think?
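To make the shape of the change concrete, here's a hypothetical sketch (the function and variable names are invented; the real cockpit code differs):

```javascript
// Hypothetical sketch only - names are invented, not the actual HCP cockpit source.
// Before: every tab title started with the same fixed product name, so all
// tabs looked identical until you hovered over them.
function buildTitleBefore(entryTitle, accountName) {
  return "SAP HANA Cloud Platform Cockpit - " + accountName + " - " + entryTitle;
}

// After: lead with the most significant part - the navigation entry's title.
function buildTitleAfter(entryTitle, accountName) {
  return entryTitle + " - " + accountName + " - SAP HANA Cloud Platform Cockpit";
}

console.log(buildTitleAfter("HTML5 Applications", "p12345trial"));
// "HTML5 Applications - p12345trial - SAP HANA Cloud Platform Cockpit"
```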
Well, we're pretty much at the half-way point in this course, and it's going well. This week sees the end of the deadline extension for the Design Challenge that I wrote about in Week 3, but is mainly about introducing the course attendee to some basic hands-on with the SAP Web IDE. Let's have a look at how this week went.
Unit 1 "Introduction to SAP Web IDE". I'm guessing that the majority of this course's attendees may well have some familiarity with SAP's now-flagship interactive development environment (IDE). It had an interesting genesis, growing from an initial offering called App Designer, which remained a young product but seemed like it might be aiming to become, for UI5, what the Microsoft tools were for Visual Basic. There was also App Builder, which one might say was a competing product, from the Sybase stable. And how can we forget the tools and accelerators for various workstation versions of Eclipse.
What came out of that cloud of dust is what is becoming a very fine product indeed - the SAP Web IDE. Technically it's based not upon Eclipse but upon Orion - meaning the offering can be cloud-based, which it is. That said, there are occasional releases of personal versions to run on one's workstation, but of course these are still web-based - you're just running Orion locally.
This is not a new concept. While today no-one bats an eyelid when we talk of running web servers locally on our laptops, it was a big "aha" moment for me and many others back in the 1990s when one of my all-time heroes Jon Udell wrote about the concept in Byte magazine, a long long time ago now. There's an online version of the article here: Distributed HTTP.
Anyway, I won't go into all the features of the Web IDE here - find out for yourself in this unit. You're not attending the course? Get that sorted now!
Unit 2 "SAP Web IDE App Development Basics". This unit covered the basics in terms of what features the Web IDE has to support the end-to-end process described in Unit 1: Prototype, Develop, Test, Package & Deploy, Extend.
One of the challenges when developing UI5 and Fiori apps is the data. Where is it, what does the structure look like, and is it actually available at all yet? The mock data services within the Web IDE go a long way towards smoothing over the answers to these questions. Often you'll be developing an app before a backend OData service is even available. Perhaps that's your modus operandi anyway. And even if the OData service is built, there may be no actual data to test with. With the mock data services you can mock your backend very nicely. See the last unit of this week (towards the end of this post) for more content on mock data services.
And with the Web IDE you have many of the facilities you'd expect in other IDEs - code completion, the ability to lint and beautify, integration with source code control (in the form of git, of course) and more.
As a (very) long time vim user, I have to say that an IDE (rather than "just" an editor), and one in the cloud, is a concept that doesn't come naturally to me. But what the Web IDE offers is too good to ignore, so it's become my companion working environment of choice. And I recommend it to clients and colleagues alike if they're wanting to start on their Fiori journey.
**Unit 3 "Creating Your First SAP Fiori App with SAP Web IDE Templates".** One of the reasons why the Web IDE is my companion working environment is covered in this unit: app starter templates and reference apps. I built my own starter templates a good while ago for my own development workflow, but I don't maintain them, and there are only a couple of them. With the templates available in the Web IDE, you can hit the ground running and have a working basic app in a matter of minutes.
And the templates are maintained too; this means that as the best practices improve, and the maturing of UI5 continues, the differences in approach that you want to capture are captured for you. With the plugin-based architecture of the Web IDE, you can even build your own templates. As an experiment, a good while ago now when this was quite new, I created a custom template that allowed you to get started quickly with test data from a Google spreadsheet: SAP Fiori Rapid Prototyping: SAP Web IDE and Google Docs.
An equally good reason to look at what the Web IDE has to offer is the reference apps. These are full-blown apps that are a great source of wonderment - audited source code that covers myriad functions and mechanisms in real Fiori apps. You either enjoy reading source code or you don't. If you do, you're in for a treat. If you don't, grasp the nettle and at least have a go. I've always maintained that reading other people's source code is educational (sometimes to see how not to do something!). And here's no exception. The world of SAP development is changing - use these resources to give yourself a leg up. And SAP - more of this please!
Unit 4 "Enhancing Your SAP Fiori App with SAP Web IDE". This was a fairly straightforward unit, where the instructor takes us through a couple of examples of enhancing existing SAP Fiori apps. Rather than use the visual editor, we are shown the regular editor where the view is modified by adding XML elements, as shown in the screenshot from the video below. This is fine, and my preferred modus operandi.
But I do wonder if attendees are thinking: "how would I know that I could or should place an element like that, in that position, in the view?"
If an XML element starts with a capital letter, it represents a control. For example, on line 6 in the screenshot we see an ObjectHeader control.
If you look at the reference for this Object Header control in the excellent Explored app within the UI5 SDK - sap.m.ObjectHeader - you'll see in the Aggregations tab a list of aggregations. And there you'll find the "attributes" aggregation, which contains entities of type "ObjectAttribute". And these are the children that can be placed inside the "attributes" aggregation.
So yes, you guessed it - whereas controls are represented as XML elements with capitalised names, aggregations are represented by XML elements starting with lowercase letters. So the attributes element is an aggregation of its parent ObjectHeader control, and the ObjectAttribute elements within it are controls.
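Putting the rule together in a small illustrative XML view fragment (the values here are invented, not taken from the course's app):

```xml
<!-- Capitalised elements (ObjectHeader, ObjectAttribute) are controls;
     the lowercase attributes element is an aggregation of ObjectHeader -->
<ObjectHeader title="Jane Doe" xmlns="sap.m">
  <attributes>
    <ObjectAttribute text="Country of birth: Germany"/>
  </attributes>
</ObjectHeader>
```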
I'm sure that this sort of information will be covered in much more detail in the upcoming openSAP course on SAPUI5 - watch this space! :-)
Unit 5 "Testing an App with Mock Data". Ahh, a pain point in every developer's workflow - "Where do I get the data to test?" "When will the backend service be ready?" "Can I start development sooner?". Well, with the mock data service, things are a lot smoother than you might imagine.
It's definitely worth studying the Web IDE features explained in this unit. While the OData Model Editor is still something I'd like to see improved (adding some visual aspects to the editing process, rather than still having to edit at the rather verbose EDMX level), it's still a great first step, especially with the visual display of the entities and their relationships:
It's a shame that this wasn't shown in this unit.
What was shown is that the Mock Data Editor is definitely easy to use and a great boost to your testing workflow. If you're going to invest some time on this week's content, this unit is where you should focus your efforts.
And with that, week 5 is just about to start. See you next time!
It's around this time of the week that the changeover between each week's worth of content happens. Week 3 of the Build Your Own SAP Fiori App in the Cloud - 2016 Edition course has just come to a close, so it's time for me to write down my thoughts.
This week's content was shorter than usual. Deliberately so, to give the attendees a better chance at completing the Design Challenge, which started in Unit 7 of Week 2 (see my comments for that unit in the previous post in this series). There were only three units, so let's have a look at those first, and then finish with a few observations on the Design Challenge.
Unit 1 "Anatomy of SAP Fiori Apps". I enjoyed this unit very much, as it really started to explain well how the rubber hits the road. At some stage, UX needs to turn into UI and become real. Using a combination of the excellent SAP Fiori Design Guidelines and the actual controls in the UI5 toolkit itself (see the Explored app for a great showcase of many of them) - advice and building blocks in harmony - is a great way to get started on your Fiori app development journey.
Understanding the anatomy of a Fiori app - from small controls such as Buttons to larger concepts such as the floorplans, and everything in between - can make the difference between creating a Fiori app and a Fiori-like app. Here, I use "Fiori-like" to mean using the building blocks, but not in the right way.
**Unit 2 "Introduction to SAPUI5 and OData".** At less than 10 minutes long, this unit was very short indeed, providing only a very high-level introduction to two of the most important topics in Fiori - UI5 and OData. One of the aspects of the openSAP courses I've become used to is the way the instructors often squeeze as much out of each slide's content as possible. In this unit, I felt a lot of the detail was skipped.
That said, if it hadn't been skipped, I could imagine the unit being four times as long or more. There is more on OData coming later in this course, so let's hope they dig in a little more. OData is a fascinating topic, not least because it's REST-informed, based on an architectural style whose merits I worked hard to convince SAP of in the past :-)
An old post on SCN from 2004 - "Real Web Services with REST and ICF" (http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ea8db790-0201-0010-af98-de15b6c1ee1a?overridelayout=true) - where I expounded on the virtues of the REpresentational State Transfer (REST) based approach to data services ... and was slightly disowned by SAP ;-)
**Unit 3 "Introduction to Annotations and Smart Templates".** This was a very interesting unit, not least because of the implications of, and the reasoning behind, augmenting the metadata with extra semantics. I've written about semantic information before (see Semantic Pages, a post in the 30 Days of UI5 series). This time it's about adding extra information to the OData metadata to enable a more rapid construction - in some cases automatic - of UI5-based application components. A control, or set of controls, that can understand the data that is bound to it is capable of more than acting passively.
There's a lot driving the concept of annotations and their use in smart controls and templates, not least SAP's need to produce yet more SAP Fiori apps, more quickly and more reliably. Finding a way for apps, or parts of apps, to write themselves is going to help that process.
One thing that made me smile was the lovely conflict between Unit 2's statement "OData model is based on ... JSON" and Unit 3's statement "OData is based on XML".
Of course, we all know that it's based on both. JSON and XML just happen to be used to provide the format for the payload - there are different formats that the OData standard describes. But OData is also a protocol. Don't let this confusion confuse you - OData is about more than representing data; it's about describing operations upon that data too. The XML representation originates from the Atom Syndication Format (RFC 4287) - this informs the "format" part of OData. The operations originate from the Atom Publishing Protocol (RFC 5023) - this informs the "protocol" part of OData.
Design Challenge
And what of the Design Challenge? First, the deadline has just been extended by a week due to some system problems that were encountered. I think that's a pretty generous extension - well done again, openSAP folks, for reacting in the right way. I'm on holiday this week and got some earache from M for working some late night and early morning hours to get the submission in before today. Oh well :-)
As the deadline has been extended, I have to be careful not to give anything away here. But I can certainly make a few observations of my own. I'd say, all told, with putting together each of the deliverables for the challenge, doing the screen mockups and then using the online tools to create a prototype and then a study, it took a good few hours - not counting the idle time mulling over what problem I wanted to solve and the persona whose needs I wanted to address.
I think it was because I'm not actually that used to the formal process, so things weren't as smooth as they might be next time. This is part of the point, I guess - getting us used to the Design Thinking methodology and learning about the process by being persuaded to address each step in turn. Although I think it was a valuable exercise, there is something in me that is ready to admit that I already had the design of the app in my head, and extrapolated backwards a little into the Discover and Design phases. But who said it was a linear flow? :-)
Two significant community events are taking place this week. On Friday, there's the inaugural conference for UI5 aficionados - UI5con 2016 in Frankfurt, which dovetails with SAP Inside Track Frankfurt. The other event was the announcement of the new SAP Mentors Advisory Board for 2016-2018 at the start of the week.
For me, these events represent a couple of significant strands of community development within the SAP ecosphere.
What is a community, and how does one come about? Well, to answer that from an SAP ecosphere perspective, one might go back to the early 1990s, when the Internet was growing stronger by the day, but the Web was only a very young thing. In those days people communicated on the Internet mostly via group discussions facilitated either by the Network News Transfer Protocol (NNTP) - a technology that has sadly all but disappeared along with others such as Gopher and WAIS - or email, specifically the trusty mailing list mechanism. There was no such thing as web forums, and the SAP Community Network (SCN) wasn't even a twinkle in anyone's eye.
In 1995 two mailing lists were formed, independently and without knowledge of one another. One was called "sapr3-list", created by Bryan Thorp in Canada, and the other was called "merlin", created by me. The former list was focused specifically on R/3, whereas merlin still covered R/2 as well as R/3. Running and moderating a mailing list took a lot of effort, so Bryan and I were very happy to receive the superb offer of help from the Massachusetts Institute of Technology (MIT) - an SAP customer - and the lists merged to form the now-venerable SAP-R3-L, a name that still conjures up distant but happy memories for us.
Then in late 2002 I got involved with SAP and O'Reilly (for whom I'd written a book and was in the middle of writing a second), to work on an online forum style community space. We debated, discussed and planned the initial shape, style, spirit and indeed seed content for it, and in early 2003 it was born - the SAP Developer Network (SDN). In the early days we collaborated upon and wrote as much content as we could. One of the SAP contacts was Mark Finnern, now a good friend, and along with another good friend Piers Harding, and others, we worked on growing the community within the SDN.
And as you can guess, SDN eventually became SCN - the SAP Community Network - incorporating other previously satellite communities that had grown around what had been originally a more developer-focused one.
As you may know, Mark went on to found the SAP Mentor programme, which today is stronger than ever. So strong, in fact, that we find ourselves back where we started with this post, which is in the context of the newly formed SAP Mentors Advisory Board. This has been set up to nurture and guide the SAP Mentors engagement and activities into the next phase of its programme life.
So what about UI5con? SAP's adventures in open source and open protocols began a long time ago, when the Linux Lab was formed at SAP to investigate whether running R/3 on Linux was viable. The members of that little group contributed significant content to the Linux kernel codebase, especially in the area of memory management. SAP's fate with open source was sealed - with possibly the most significant recent event being the open sourcing of UI5, of course!
Fast forward to the present, and we see SAP presence at many open and public events, such as the Open Source Convention (OSCON), where for example in 2014 we gave a workshop on OpenUI5, and FOSDEM, where last year we engaged with the most critical of hackers to evangelise this awesome toolkit (which as you know, I hope, is the engine that's powering the Fiori revolution).
And so it was inevitable, due to the popularity and interest in UI5, that UI5con was born, in discussions, planning and dreaming over the last 12 months. It takes place this Friday, and there are some really great speakers lined up including some of my heroes from Bluefin of course!
This inaugural UI5con event is cohabiting time and space with perhaps what can be seen as the grandfather of modern SAP community events - SAP Inside Track. Born in 2009 in London, it has since seen countless instances run all over the world - in places as exotic as the Caribbean, Istanbul, Hyderabad and even Manchester! If you're in the SAP space, and want to learn from colleagues and take the next steps in building out your network, I'd strongly recommend you check out the SAP Inside Track movement and get involved. It is the classic community event, run by the community, for the community.
SAP Inside Track isn't the only event type - there are plenty of others. In particular I'd like to call out the SAP Code Jam events, which again take place all around the world, and on an even more frequent basis than the Inside Tracks.
The future of the SAP community, as a whole, looks in very good shape. With the SAP Mentors Advisory Board on the one hand, and the self-organising community events such as Inside Tracks and Code Jams, and of course now with the excellent openSAP Massive Open Online Course (MOOC) platform - there are so many opportunities to learn from, get involved with and shape our community into what it should be for the next 20 years. See you online!
Well, the weeks certainly come around fast in these openSAP courses, and so we find ourselves on Week 2 of the Build Your Own SAP Fiori App in the Cloud - 2016 Edition. Here's a quick run-down of what was covered, with some thoughts from me.
Unit 1 "SAP Fiori 2.0 Overview". This first unit gave a nice overview and introduction to the SAP Fiori 2.0 concepts. Yes, Fiori 2.0 is still conceptual in parts, but we're already seeing practical output, in the form of the very real Overview Page mechanism, for example. There are plenty of new concepts for Fiori in the 2.0 design, such as the Viewport, the Control Space and the Copilot.
Some of these concepts are not new, but they don't have to be; in fact one of the key tenets of Design Thinking, introduced in Unit 2, is "Build on the ideas of others". I rather think that some of the ideas have been taken from the Dashboard concept that Nat Friedman built a good while ago - I wrote a few posts on Dashboard back in 2003 and then later that decade.
Unit 2 "Introduction to Design Thinking". This content isn't new; in fact the openSAP folks stated that it's the same content as last year's course. Nevertheless it was worth a brief re-introduction to set the scene for the design principles that are to come in Unit 3. The thing with Design Thinking, at least for me, is that it's all pretty obvious in theory, but putting it into practice requires effort.
I think the concepts around the pre-build phases of app delivery still need to be successfully and firmly landed in some organisations. Further, there's a fine balance to be struck between not letting technology (and developers) drive solutions, and designing something that would require a great deal of effort to implement. We have the tools (and the design principles) and know how to use them, so we should use that knowledge to inform the process.
Unit 3 "The SAP Fiori Design Guidelines". Anyone who's looked into SAP Fiori UX is likely to be at least lightly acquainted with these, either at 30,000 feet - with the 5 principles (Role-Based, Responsive, Simple, Coherent and Delightful) - or at ground level with the practical implementation advice in the online documentation. But as this course is soon to introduce the first hands-on element (designing an app), it's valid to re-introduce them at this stage, if only to set a level knowledge playing field for all participants.
I did like the explicit calling out of the concept of "Fiori-like" towards the end of this unit. Design is not black and white, and there's been a long-standing question over whether non-SAP folks could call their apps "Fiori", or whether they had to say "Fiori-like". I've maintained the position that if the design guidelines were followed, then they were "Fiori", not just "Fiori-like". That said, with the title of the course we put together in the early days (three years ago!) - "Building SAP Fiori-like UIs with SAPUI5" - things weren't so clear-cut :-)
Unit 4 "SAP Fiori Decomposition and Recomposition". You might be forgiven for thinking that this process is a somewhat over-formalisation of what appears to be straightforward: the extraction of functionality from the "kitchen-sink" transaction-based approach of the traditional SAP experience into smaller role- and task-focused applications, sometimes combining functionality from previously separate transactions. Sure, that's what it is.
But it's more than that, I think, when seen as a complete process. The functionality being extracted is predominantly being extracted from a proprietary context, and reconstituted into a neutral, platform-independent and responsive context. We've been stuck too long in the world of the proprietary, tethered too much to the desktop with SAP because of the Microsoft disease that has taken hold in enterprises over the last couple of decades or more. So SAP targets that market, and the only real experience for many has been SAPGUI for Windows, an experience that is so far from being portable it became one of the catalysts for Fiori. Recomposing functionality into the context of the one true native platform - the Web - is a great move for SAP.
Unit 5 "The Importance of Prototyping". Like Design Thinking, but perhaps less so, prototyping phases are sometimes difficult to bring about and get the most out of, especially when deadlines and budgets are tight. Organisations need to work out the value for themselves of the Discover and Design phases, rather than just focus on the Develop and Deploy phases ... especially those that involve the business.
There's a leap of faith that's required, and we're all responsible for helping make that happen.
On mockups and prototyping, especially in the early stages, I'm a fan of the simplest thing that could possibly work, which is pencil and paper. Low cost, discardable, and folks aren't distracted by debating what colour a button should be.
Moving into the benefits of the later stages of prototyping, I'm reminded of one of the founding beliefs of the Internet Engineering Task Force (IETF) - one of the bodies that maintain the standards that mean we can all simply take the Internet and all its children (such as the Web) for granted. This belief is "rough consensus and running code" (from the Tao of the IETF). Showing a working model of something you're trying to convey is very valuable indeed.
(I did take issue with the stated "correct" answer to one of the self-test questions in this unit: Q: "How can app implementation be inexpensive?" A: "If enough iteration, prototyping and validation is done beforehand". That might mean the UX is right, but it doesn't imply that making that UX happen is easy!)
Unit 6 "Prototyping 101 Using SAP Splash and Build". Following on from my comments earlier about the adoption and landing of the Discover and Design phases, this unit contained an overview of the two tools that help to support those stages - Splash and Build respectively. It's a longer-than-usual unit with 25 minutes of video, but worth watching if you haven't already seen or played with these tools.
The tools themselves are already very accomplished, but I do wonder how much the studies have actually been employed thus far. It's partially a circular situation - the tools won't be used unless Discover and Design are more strongly adopted, and the adoption has a better chance of taking hold if tools like this deliver on their promise.
One of the things that caught my eye (and that hadn't been available in the early access I'd had to Splash a while ago) is the Gallery of existing designs. I'm looking forward to browsing through it, and seeing how the designs complement the Fiori Design Guidelines.
Unit 7 "Design Challenge". The last unit of this week sees the start of the first of the two practical portions of this course. This is the design challenge, where we must put into practice what we've learned this week - covering the Discover and Design phases of the end-to-end process, including use of Splash and Build. Giving and receiving feedback from course peers is also involved, which is a nice way to scale this, and also an opportunity for me to see feedback stats in these tools that have come from someone other than me!
This design challenge lasts two weeks, which means that although there's an assignment to complete this week, there isn't one next week - so we have time to complete the first part of the challenge (submitting the design mockups) without needing to spend more time than budgeted for this course. A nice move.
There's one more aspect of this course that I wanted to mention last time but didn't get round to it. At the end of each week, there's a "just in time" video blog entry which gives the course creators and instructors a chance to impart last-minute information and changes. I like this aspect, and the relaxed nature of how it's presented. Indeed, with the contents of this course being based on software-as-a-service products in the cloud, and with the changes that happen on a monthly basis, it's a good idea.
And on this note of constant change, I'll leave you with one thought from Martin Fowler. I was reminded of this wisdom via Sascha Wenninger's tweet this morning: "If it hurts, do it more often" :-)
(For links to commentary for further weeks, see the first post in this series: "fiux2" - The openSAP Fiori Course - 2016 Edition.)
"fiux2" Week 2 - Design Your First SAP Fiori App
"fiux2" Week 3 - Get Ready to Create Your First App
"fiux2" Week 4 - Create Your First SAP Fiori App
"fiux2" Week 5 - Enhance an SAP Fiori App
"fiux2" Week 6 - Extend SAP Fiori Apps
"fiux2" Week 7 - Build Your Own SAP Fiori App
I've written about the openSAP Massive Open Online Courses (MOOC) system in the past. I'm a big fan, particularly for the way the folks run the ship. They are "open" in the best possible ways.
Anyway, last week saw the start of the much anticipated 2016 Edition of the course "Build Your Own SAP Fiori App in the Cloud", aka "fiux2". I'm enrolled and have just completed the first week, as have many of my colleagues.
I'm a great believer in learning and re-learning subjects, especially from different sources. Even if you feel you know a good chunk of a given topic, learning with new material, and from different angles, will give you not only knowledge reinforcement but also new nuggets, which are therefore more easily digestible. If only for this reason, I'd recommend this course to you. The time commitment isn't unreasonable, there's hands-on, and even a competition!
And in case you need a little bit more convincing, I thought it might be fun and perhaps useful to write a short post each week, describing what we learned. So here goes with Week 1.
Week 1 - Get to Know SAP Fiori UX
A gentle start to the content this week, with a balance of marketing (you knew it was going to come; best to get it over with in the first week) and a 30,000-foot view of the UX strategy. If you've followed the SAP Fiori UX revolution at all, it should all be fairly familiar to you. We also covered the SAP HANA Cloud Platform (HCP) and the key role it plays in the extension concept generally, and in particular for Fiori apps.
Within the context of HCP and the cloud-centric approach, we looked at various tools that are available for different stages in the Fiori app journey, from discovery (Splash), through design (Build) to development (Web IDE). There was also mention of the Rapid Development Solution (RDS) available, and the wealth of documentation that was to be had.
We looked at SAP Fiori in the cloud, with a specific focus on Fiori-as-a-Service (FaaS) and the HANA Cloud Portal, where you can build sites including Launchpad-specific landing pages.
Finally, the focus ended up on S/4HANA and how Fiori UX fits directly as the strategic way forward. Now I think it's fair to say that "SAP Fiori is the default user experience for S/4HANA", but if I'm not mistaken, one of the lecturers gave the impression that it's the only user experience. Yep, here, in Week 01 Unit 06:
"We provide Fiori, and Fiori is the default user experience for S/4HANA. There's no other user experience there. We're not using SAP GUI, we're barely using any other user experience like NetWeaver Business Client."
Now, that's not quite true, is it? S/4HANA, architecturally, is still based upon an ABAP stack system, which means the venerable R/3 architecture is still in play. Yes, of course we have HANA underneath and Fiori on top, but in the middle we still have DISP+WORK and all the wonderful SAPGUI and ABAP goodness that we know and love. And while there are huge inroads made already in the new Simplified suite, there's always SAPGUI.
I'll just put that down to the lecturer's excitement and enthusiasm :-)
Overall, it was worth the hour or three invested in watching the videos**. And the assignment at the end of this week was pretty straightforward, as long as you were prepared to translate marketing speak into specific questions.
I'll leave you with one thought, which we may well pick up in week 2. Here's a screenshot, from one of the slides, of an S/4HANA Fiori app:
In relation to the Fiori 2.0 designs, which are being introduced in week 2, there were some comments on Twitter today about the perceived complexity of the newer apps. I think we're already seeing complexity here, but I don't think it's necessarily the end of the world. Some things do require you to see more information. Not everything is as simple as approving a purchase requisition or booking leave.
**I'd recommend the openSAP mobile app (available for Android and iOS), which now works properly - i.e. it doesn't immediately crash when you try to download content to watch offline.
Building Blocks
That theme is the concept of basic building blocks with which vast cathedrals can be constructed. Those building blocks are, in Lisp terms at least, car, cdr and cons.
One of my companions on this path is Daniel Higginbotham's Clojure for the Brave and True. In Part II, covering Language Fundamentals, Clojure's abstractions, or interfaces, are discussed. One of the Clojure philosophies is that the abstraction idea allows a simplified collection of functions that work across a range of different data structures. Abstracting action patterns from concrete implementations allows this to happen. This is nicely illustrated with a look at the first, rest and cons functions from the sequence (or 'seq') abstraction.
There's a close parallel between first, rest & cons in Clojure and car, cdr & cons in other Lisps such as Scheme. And there's an inherent and implicit beauty in a collection of constructs so simple yet collectively so powerful. You can read about the origins of the terms car and cdr on the Wikipedia page; they have a depth and a degree of venerability of their own. Essentially both sets of functions implement a linked list, which can be simply illustrated, as shown in the book and elsewhere, as a sequence of connected nodes, like this:
node1 node2 node3
+--------------+ +--------------+ +--------------+
| value | next |-->| value | next |-->| value | next |
+--------------+ +--------------+ +--------------+
| | |
V V V
"one" "two" "three"
Implementing a linked list
Daniel goes on to show how such a linked list of nodes, along with the three functions, can be simply implemented in, say, JavaScript. Given that these nodes could be represented like this in JavaScript:
node3 = { value: "three", next: null }
node2 = { value: "two", next: node3 }
node1 = { value: "one", next: node2 }
then the first, rest and cons functions could be implemented as follows:
function first(n) { return n.value; }
function rest(n) { return n.next; }
function cons(newval, n) { return { value: newval, next: n }; }
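For a quick sanity check, here's how those three functions behave against the nodes above (my own usage sketch, not from the book, with the definitions repeated so the snippet stands alone):

```javascript
// The three-node linked list and the three core functions, as above.
const node3 = { value: "three", next: null };
const node2 = { value: "two", next: node3 };
const node1 = { value: "one", next: node2 };

function first(n) { return n.value; }
function rest(n) { return n.next; }
function cons(newval, n) { return { value: newval, next: n }; }

first(node1);               // => "one"
first(rest(node1));         // => "two"
first(cons("zero", node1)); // => "zero" (a new list; node1 itself is untouched)
```

Note that cons doesn't modify the existing list at all; it just creates a new node pointing at it.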
With those basic building blocks implemented, you can even build the next level. For example, he shows that map might be implemented thus:
function map(s, f) {
if (s === null) {
return null;
} else {
return cons(f(first(s)), map(rest(s), f));
}
}
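And a quick try-out of that map, building a list with cons and upper-casing each value (again a sketch of my own, with the earlier definitions repeated so it's self-contained):

```javascript
function first(n) { return n.value; }
function rest(n) { return n.next; }
function cons(newval, n) { return { value: newval, next: n }; }

function map(s, f) {
  if (s === null) {
    return null;
  } else {
    // Apply f to the first item, then recurse on the rest.
    return cons(f(first(s)), map(rest(s), f));
  }
}

// Build the list ("one" "two" "three"), then shout it.
const list = cons("one", cons("two", cons("three", null)));
const shouted = map(list, function (v) { return v.toUpperCase(); });
first(shouted);             // => "ONE"
first(rest(rest(shouted))); // => "THREE"
```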
To me, there's a beauty there that is twofold. It's implemented using the three core functions we've already seen, the core atoms, if you will. Moreover, there's a beauty in the recursion and the "first and rest pattern" I touched upon earlier in "A meditation on reduction".
Using the building blocks
Let's look at another example of how those simple building blocks are put together to form something greater. This time, we'll take inspiration from a presentation by Marc Feeley: "The 90 minute Scheme to C compiler". In a slide on tail calls and garbage collection, the sample code, in Scheme (a dialect of Lisp), is shown with a tail call recursion approach thus:
(define f
(lambda (n x)
(if (= n 0)
(car x)
(f (- n 1)
(cons (cdr x)
(+ (car x)
(cdr x)))))))
If you stare long enough at this you'll realise two things: it really only uses the core functions car (first), cdr (rest) and cons. And it's a little generator for finding the Nth term of the Fibonacci sequence:
(f 20 (cons 1 1)) ; => 10946
I love that even the example call uses cons to construct the second parameter.
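Just for fun, here's that same generator transliterated into JavaScript, reusing the first, rest and cons functions from earlier (my own sketch, not from the talk; car maps to first and cdr to rest):

```javascript
// first/rest/cons as before; here cons simply builds a pair (a, b).
function first(n) { return n.value; }
function rest(n) { return n.next; }
function cons(newval, n) { return { value: newval, next: n }; }

// The pair walks the Fibonacci sequence: (a, b) -> (b, a + b),
// returning the first element of the pair when the counter hits zero.
function f(n, x) {
  return n === 0
    ? first(x)
    : f(n - 1, cons(rest(x), first(x) + rest(x)));
}

f(20, cons(1, 1)); // => 10946
```

Like the Scheme original, even the example call uses cons to construct the second parameter.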
I read today, in "Farewell, Marvin Minsky (1927-2016)" by Stephen Wolfram, how Marvin said that "programming languages are the only ones that people are expected to learn to write before they can read". This is a great observation, and one that I'd like to think about a bit more. But before I do, I'd at least like to consider that studying the building blocks of language helps in reading, as well as writing.
Since it might be too obvious to compare it with the multi-faceted nature of a diamond, I'm going to compare HCP instead with the trusty 20-sided die that played a big part in my youth, as the random number generator for role playing games such as Dungeons & Dragons.
So in the context of SAP and the cloud, I roll the die, and land a 20. Let's look up what that translates to. Ah yes, the Java runtime on HCP. As the documentation says:
"You can develop applications for SAP HANA Cloud Platform just like for any application server. SAP HANA Cloud Platform applications can be based on the Java EE Web application model. You can use programming logic that is well-known to you, and benefit from the advantages of Java EE, which defines the application frontend. Inside, you can embed the usage of the services provided by the platform."
This already counts for a lot on your scorecard - let's look at why.
The nature of the UI5 toolkit and the architecture behind how Fiori apps are built already open up the SAP application ecosphere to the wider world of application developers, due to the adoption of open standards plus a language and programming model (HTML5) that is well-known to large groups of non-SAP developers.
In the same way, this Java Enterprise Edition (EE) Web application model that is supported by HCP opens up the platform to many a talented group of developers who may not know much about, say, ABAP and traditional R/3 architecture, but can certainly build apps that can now, in the context of your cloud or hybrid SAP landscape, add value and turn innovative business ideas into reality.
SAP embraced Java a long time ago, and now that relationship has matured, we see a couple of things: SAP's investment in the Java Virtual Machine (JVM), and in the Java development and runtime ecosphere. Let's examine the first of these two.
Like the mythical centaur, the HCP has two hearts, one of which is the JVM - the target runtime platform for those Java applications that we're contemplating right now.
Java is a language that compiles to bytecode, an instruction set for the Java virtual machine (VM) which is the equivalent of machine code for an actual machine. And I would posit that it is not only the adoption of Java as a language specification amongst enterprises the world over, but also the ubiquity of Java's runtime environment, the JVM, where Java applications can run, that is behind the real success of this language and community. (There's a parallel here with web browsers being a hugely distributed platform for executing JavaScript, but that's a story for another time).
In fact, I would suggest that rather than just Java per se, it's actually the JVM as a target runtime that makes SAP's HCP shine as a platform for business applications. And here's why.
The ubiquity of the JVM has, unsurprisingly, attracted language developers to view it as a runtime platform for their particular languages. Today it's not just applications written in Java that can run on the JVM. There are many languages, some of them rather important, that compile to Java bytecode and are therefore, as far as the JVM is concerned, equal execution candidates. You can peruse these JVM languages on Wikipedia, but here are a few that come to mind:
Clojure - a dialect of Lisp which champions functional programming and immutability.
Scala - an object oriented language with functional programming aspects.
JRuby & Jython - JVM versions of the well-known Ruby and Python languages.
Functional programming is a particular focus of mine right now, for many reasons (one being the ability to build solid code where whole classes of errors just don't exist), and so I have already been experimenting with Clojure apps on the HCP platform. But more generally, if your enterprise has different teams of developers -- it's not atypical to see "SAP developer teams" and "others" -- the SAP HANA Cloud Platform may be the shared runtime catalyst for closer collaboration and mindshare for your next generation of enterprise applications.
At SAP TechEd EMEA 2015 in Barcelona, I had the honour of talking to SAP's Jeff Word, as a guest on his show The HANA Effect. We discussed HCP, these JVM related considerations, functional programming, and more. If you're interested in hearing the show and finding out more about the future of HCP-powered development, head on over to our podcast episode 38 "Proudly hacking since 1987" and have a listen. The future of SAP enterprise applications is a good place!
Looking at the event location, and the subjects that our guest speakers are going to talk on, there's a pattern that emerges quite clearly. When you add in the Design Thinking workshops which are part of the day, the pattern becomes almost too obvious to state.
That pattern is people-centricity. All our guest local authority speakers are talking on subjects that have people & their activities first and foremost: "citizen engagement", "mobile", "employee revolution". The digital devolution will be about putting people first. Citizens and staff alike. Hey - staff are citizens too! So we're just talking sets and supersets of people generally, right?
It's unlikely to have escaped your notice that the consumerisation of IT has matured into something more tangible, especially in the SAP-flavoured software space. The tired user interface that we know and love, SAPGUI, is being slowly but surely replaced by a renewal of the whole user experience, or "UX", as I talked about earlier this week in my UK & Ireland SAP User Group Conference session "Can I build a Fiori app? Yes you can!".
This renewal has been represented by the arrival of SAP Fiori, that entered the scene in 2013 and since then has gone from strength to strength, not only as a set of apps (that has grown from an initial 25 to over 600 today) but also as a rich collection of standards & best practice approaches. Perhaps most pertinently, however, we see SAP Fiori as being the vanguard, the herald, of a whole new focus on design, on how the user can be best served, and on a surprisingly refreshing "prequel" to the process of building great apps.
Today, SAP is not your father's, or your mother's SAP. It's not even your grandparents' SAP. It's an organisation with a focus on the person like never before. The user. And in our context, the citizen. SAP's Chief Design Officer (they have a Chief Design Officer!) has championed Design Thinking, driven the Fiori revolution, and with his teams, has spearheaded a new breed of toolset and workflow that kicks in before a single line of user interface code is written.
That toolset and workflow is embodied in SAP's UX-as-a-Service (UXaaS), and can best be contextualised within the Discover - Design - Develop - Deploy diagram that represents what UXaaS covers.
There are two key points here for me and for you.
Oh, but what about the location of our event, you ask? Well, that's people centric too. Manchester Central Library has undergone a wonderful transformation recently and is now pretty much fully re-opened to the public. A public that can bootstrap itself to better and higher things with the knowledge and community within, as is true for libraries across the country. But you guessed that already, right?
Come along to the library in a couple of weeks and find out more.
I was talking to my friend and colleague and fellow SAP Mentor Chris Kernaghan on the evening of Day 1 about levelling up in JavaScript, and had planned to go through some JavaScript goodness with him during the conference. But then I thought it might be fun for others to come along too. So here's the (rough) plan:
- meet at the Bluefin Solutions stand on the show floor on Tue 24 Nov (last day of #UKISUG15) at 1300 (should give you time to grab a bit of lunch first) if you're interested in coming along (that's TODAY, folks!)
- we'll find (or will have found) a room to use - probably Level 5 Hall 8b (the SAP Mentors Track room)
- the session will be between 30 and 45 mins max
This session will be for those already with some basic JavaScript skills, and/or those interested in using JavaScript to manipulate data inside an SAP Fiori app. Even if you have no JavaScript skills, you might find it interesting. And if you don't, you can always just laugh at me.
The session is titled "Programming in a more functional style in JavaScript" and we'll go through handling and manipulating data from a running Fiori app. Handling and manipulating data like this is a common use case. Becoming more familiar with doing this will help you build your skills and confidence in the future normal of SAP frontend development. Learning to do this in a functional way will give you a basis to write more solid and reliable code.
See you at the Bluefin stand at 1300 today!
I'm looking forward to a packed set of days next week in Barcelona, where SAP TechEd EMEA 2015 is taking place. It's packed in many ways: so much to share, so much to learn, so many people to meet and re-meet, so many kilometres to walk in the convention centre, and so much coffee.
I'm involved in a number of activities during the event. I'm co-presenting a number of hands-on sessions relating to SAP HANA Cloud Platform (HCP) and the SAP Web IDE, Fiori, and of course the fantastic UI5 toolkit. Here's a quick summary (in my own words):
UX260 Experience SAP Fiori on SAP HANA Cloud Platform
SAP HANA Cloud Platform (HCP) is changing the Fiori landscape in so many ways, by offering features and services all the way from development, through connectivity & deployment, to exposure to end users. This session gives you hands-on experience at the places where Fiori and HCP meet.
UX261 Extend a Fiori App with SAP Web IDE based on beautiful Fiori Reference Apps
The key to the present and future success of Fiori in your organisation is the correct adoption of best practices, both from a general development perspective and also from an extension perspective (for standard SAP-delivered Fiori apps). There's a lot to get wrong, but a ton of help for you to get it right. This session shows you what and how.
UX262 Building SAPUI5 Applications using SAP Web IDE
The SAP Web IDE is the cloud-based IDE that can. From a standing start not too long ago, it's progressed into an impressive set of features, not least those involving the templating and plugin system. This (4hr) hands-on session, which is being given twice next week, covers these subjects and more, in particular the rather exciting Runtime Adaptation (RTA) features of UI5. Oh yes!
In addition to these hands-on sessions, I'm taking part in a couple of SAP TechEd Live Studio events - I'm interviewing one of my heroes on SAP Development Tools (and in particular those available on HCP), and taking part in a live panel discussion on developer engagement. Stay tuned!
UPDATED: 09 Nov 2015 - Now available as a Kindle version on Amazon!
Last month, I wrote about our "30 Days of UI5" series of blog posts in The advent of UI5 1.30 and what it means for us. It was a joint effort between a number of our developer community team members, and SAP colleagues too. It exists in its original form as a series of 30 blog posts, but for SAP TechEd 2015 we're releasing the series as a complete electronic version, for your reading pleasure!
It seems fitting to release it right now. While I'm not at SAP TechEd USA in Las Vegas (except in spirit, as I seem to have appeared* in the hallways of the SAP TechEd convention centre, *blush*) I know that there will be plenty of transactional and application eye candy on screens all around the place.
And you can bet your bottom dollar that much of it will be powered by the engine of that Fiori revolution - SAP's UI5 toolkit.
So to celebrate that, to thank the super teams of developers and designers at SAP who have nurtured UI5 from the ground up, and to share with you some hopefully interesting and intricate aspects of that toolkit, it seems appropriate to release the series as a downloadable whole. As a bonus, it has a great foreword from one of the core UI5 developers Andreas Kunz, with some great history of where UI5 came from. Worth the download alone!
First up, for SAP TechEd USA in Las Vegas this week, we have the PDF version. It's less than 100 pages, so go nuts, buy some ink, print it off, and enjoy a relaxing soak in the bath with it.
If you're more of a Kindle person, then we'll be releasing a version for you in time for SAP TechEd EMEA in Barcelona in November, so watch this space! UPDATE: Now available!
So, without further ado, the PDF version of 30 Days of UI5 is available here: 30 Days of UI5 - PDF Version.
Actually, one of the things about Clojure that appeals to me is that it is a Lisp. One of the books I remember buying when I was still in my early teens was Artificial Intelligence by Patrick Henry Winston. I still have it, a second printing from 1979. While I didn't understand very much of it, I was somewhat mesmerised by the Lisp forms, written in all caps and with beautiful bracket symmetry. Lisp cropped up again for me a few years later, in the amazing book Gödel, Escher, Bach by Douglas Hofstadter, and it was equally mesmerising.
So when I finally discovered Clojure, I decided to delve beneath the shimmering surface that had heretofore had me transfixed, and experience the beauty from within.
One of the recurring patterns emerging from what I read, even at that early stage, was that of "head and tail". This is alternatively known as "first and rest", or, going back to early Lisp origins, "CAR and CDR". Given a sequence, the idea is that you can get hold of the first item, and everything but the first item (the rest), as two separate addressable entities. You do something with the first item, and then repeat the process, where the sequence becomes what you had just identified as the rest.
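As a tiny illustration of the pattern (a sketch of my own, using a plain JavaScript array as the sequence):

```javascript
// "First and rest": do something with the first item, then repeat the
// process on the rest, until the sequence is exhausted.
function sum(xs) {
  if (xs.length === 0) return 0;  // nothing left to process
  const [head, ...tail] = xs;     // the first item, and everything but the first
  return head + sum(tail);        // process the first, recurse on the rest
}

sum([1, 2, 3, 4]); // => 10
```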
There's something appealingly simple in this pattern, not just because it's something that we can all immediately understand, but also because there's an unspoken truth in it about the general approach to sequential data structures, and data structure processing in general. It can perhaps be summed up nicely in an epigram from Alan Perlis thus:
"It is better to have 100 functions operate on one data structure than 10 functions on 10 data structures."
Way back in the mists of time, at an SAP TechEd conference in the early 2000s, I saw the future. It was in the shape of a small box, with a credit-card-sized slot, and it was then billed as the JavaStation. It was a small network computer from Sun.
The idea of a network computer wasn't exactly new, and wasn't that different to the X terminal concept. What completely bowled me over was the seemingly magic session management, which was based around a physical credit-card-sized identity module. You could "hot-remove" it from the JavaStation you were at, walk over to another, insert it, and see your session recreated instantly on that new network computer.
It must have made a lasting impression on me, because I can still remember the experience as if it was yesterday. As I cut my young computing teeth on terminal-based computing, and indeed built my own X terminals for kicks, I was no stranger to the power and utility of the computing architecture of the day - minicomputers and mainframe computers housed in some remote facility, to which you were connected via a generic interface (a VT100 or 3270 type terminal, or an X terminal) that itself had little relevant computing horsepower - just enough to achieve connectivity and display.
Fast forward to today, with the SAP HANA Cloud Platform (HCP) rapidly becoming the go-to platform for new apps, and extensions to existing apps, and the meeting point for on-premise and other cloud systems.
One of the subscriptions available within this Platform-as-a-Service (PaaS) offering is the SAP Web IDE - a still relatively young but already very accomplished Integrated Development Environment (IDE). While not everyone's current primary environment for experimentation and development, it is already far beyond just being a serviceable tool for building apps. If you step back and look at the bigger picture that SAP has for its cloud platform offering as a whole, the SAP Web IDE is an incredibly important artifact on the roadmap that SAP envisions (briefly: on-premise standard, custom extensions and new apps in the cloud).
And when you couple the SAP Web IDE with some of the other facilities that come as standard with SAP HCP - such as a git repository server for source code management, automatic deployment mechanisms, and the Destinations facility within the wider connectivity services, you have everything you need to build non-trivial apps that reach out to SAP and non-SAP systems. Furthermore, significant facilities within the SAP Web IDE are being added on a regular basis.
And so the story turns to an event earlier this month - SAP Inside Track Sheffield. On Day 2 of this event, I led an all day workshop on Fiori and UI5 development (if you're interested, I've made the exercise document available here: Fiori Products App Development Workshop - Exercise Document).
The facilities at Sheffield Hallam University, our awesome hosts for the event (thanks to Steve Lofthouse), were great, and even extended to proper classrooms for the breakout sessions and workshop. On the day of the workshop, I arrived to set up. We had been given optional access to the PCs in the classroom - student PCs and an instructor PC at the front. These PCs were run of the mill, nothing wrong with them per se, although they were less than ideal as usable workstations, in that they were running Microsoft Windows, in lieu of a proper operating system. But what they all did have was a modern web browser - Chrome.
SAP HCP administration, and access to all HCP's facilities, is via a cockpit that is web based (and built on UI5, of course). The SAP Web IDE is also browser based (and also built with UI5, along with Orion).
I'd never used this particular instructor PC before, nor had the workshop attendees ever used the student PCs in front of them. But all they needed was to fire up Chrome and connect to their HCP trial accounts, from where they could manage their app software, define destinations, access reference source code on Github, and use the SAP Web IDE to develop, test and then deploy their solutions to the workshop exercises. And inevitably (and intentionally) debug those apps, in flight, too (using the super powerful Chrome Developer Tools).
While we weren't working on JavaStations, we were enjoying the equivalent power and approach that those early network computers championed. Our whole development workflow, even my own workshop design, preparation and documentation, was all done from a single facility - a web browser. Granted, that web browser instance was mostly on my laptop, but that's just because that's the workstation to which I have closest physical access.
But I've tweaked and extended the workshop on various machines over the workshop's history, and used different machines to deliver it too. The link between my work, my environment and the physical machine I happen to be using is very loose; I can switch between web browsers on different machines as easily (perhaps not as magically) as was demonstrated with the JavaStation session management facilities.
With the advent of the cloud, combined with the web, developer workflow is changing, and developer productivity for many of your SAP projects, in the context of HCP and SAP's strategic direction for itself and its customers (you and me), is moving towards this idea of a modern take on the network computer. I'd argue that for many SAP Fiori extension projects, to take a relevant example of the work that goes on today, all one needs is a browser.
Embrace what HCP has to offer, understand what SAP's direction is in this regard, and ready yourself for the cloud, the new mainframe, with the simpler workstation software requirements that go hand in hand with that. Developer workflows that exist entirely in the cloud are here today.
UI5 started out inside SAP back in late 2008, and it has been available to us as customers and partners since around 2012 (I wrote about an early 1.2 beta version back then). By now most, if not all of us, should be aware of SAP Fiori UX and what it represents. The growth of the Fiori application design patterns and implementations has been nothing short of stellar. And its technical success is all down to the UI5 toolkit that is lovingly nurtured and tended by an awesome group of modest heroes in SAP.
To celebrate the advent of UI5 1.30, I set a goal of building a series of 30 daily blog posts on UI5.
With the help of friends and colleagues here at Bluefin and also at SAP, this goal was reached, and exists in the form of a blog post series called 30 Days of UI5, or "30UI5" for short. I've written more about this in my post Building blocks for the future normal, where you can read more about UI5 in the context of S/4HANA. Otherwise, just head on over to the series and take your pick from the titles. Some are technical, others less so. The final post in the series is by Sam Yen, SAP's Chief Design Officer. Titled The origin of becoming a fundamental enabler for Fiori, it gives some great insight into the origins of UI5 and Fiori.
To explain what the advent of 1.30 means for us, for you, and for SAP's continued UX revolution, we'll have a brief look at some of the recent innovations. Here's my top 5 list of innovations and why they're important. You can find links to these and more from the What's New page for the 1.30 release.
Perhaps this innovation is the least obvious. Let's take a look at the context of the 1.30 stable release announcement. It appeared yesterday on the OpenUI5 blog: New stable Release: OpenUI5 1.30. SAP open-sourced the UI5 toolkit back in 2013. But this act was no empty gesture; the UI5 codebase that powers our enterprise future normal continues to be developed in full view, and in cooperation with customers, partners and developers. And 1.30 was available first in the open source flavour. What does that mean for us? Innovation and scrutiny of the highest degree, bringing a quality and thoroughness that can only be achieved by such an open process.
This is a general innovation that sees the UI5 toolkit move towards an "asynchronous-first" loading approach for resources such as views and controllers. Performance is a key foundational aspect of the UX revolution, and alongside the existing network traffic improvement techniques such as JavaScript minification and the "preload" mechanism (compression of all application resources into a single file), asynchronous loading will sharpen up the performance of Fiori apps, resulting in happier users. For more info, see the 30UI5 post An introduction to sap.ui.define.
One aspect of OData that differentiates it from other data sources is that it's server-side based, rather than client-side based. But another aspect is that the data represented comes complete with metadata and annotations. An OData service bristles with knowledge about itself and details about the entities that it exposes. So much so that it is possible to take advantage of this in applications, where developers can use UI5 features such as the OData Meta Model mechanism, and the Annotation Helper to use metadata expressions to enhance aspects of data binding. The result is that building helper and formatting functions in Fiori apps becomes simpler and more declarative, with fewer moving parts and fewer places for things to go wrong.
The standard SAP Fiori apps are built upon a scaffolding layer that provides consistency of architecture, function and design. This scaffolding layer is, however, internal-only, (deliberately) undocumented and not recommended for customer use. This doesn't mean we can't build SAP Fiori apps ourselves, far from it. But it does mean that its pending deprecation might leave us bereft of good technical support for building apps with that consistent design. The Semantic Page control and its relations are the first steps to providing a proper replacement for the monolithic scaffolding mechanisms, resulting in the possibility of a more standardised approach to realising Fiori designs in customer scenarios. For more info, see the 30 Days of UI5 post Semantic Pages.
Version 1.30 sees a family of great tutorials available within the Software Development Kit (SDK) itself. Beyond the obligatory Hello World! tutorial, there is a 35-step Walkthrough that takes the reader through many of the key aspects of developing with UI5, a new 17-step tutorial on Navigation and Routing which has long been anticipated, and a 15-step Data Binding tutorial. While in the past developers had to make the most of the scattered examples throughout the SDK to learn, or infer, best practices, there is now no excuse for not knowing how to do things, and how to do them right. With these tutorials, your Fiori developers are now equipped with the right knowledge to build robust applications and custom extensions in your organisation.
UI5 1.30 is here now, already available in the form of OpenUI5 through the CDN, and coming to a frontend server near you in the form of SAPUI5 soon. You can track availability of SAPUI5 through the Maintenance Status page. Get ready to embrace the innovations and make them work for your organisation!
I've just finished curating and contributing to a series called 30 Days of UI5. It is a set of 30 daily blog posts on the subject of UI5, the toolkit that powers SAP Fiori UX. The posts were written not only by me, but also by some of my illustrious Bluefin Solutions colleagues such as John Murray, Sean Campbell, James Hale, John Appleby, Chris Choy, Nathan Adams and Jon Gregory. Not only that, we had some great contributions from our SAP colleagues Thilo Seidel and SAP's Chief Design Officer Sam Yen. Awesome work.
UI5, in its original SAP-licenced flavour "SAPUI5", and the open sourced flavour "OpenUI5", was born in late 2008, and has reached a level of maturity today to the extent that a milestone release, 1.30, is imminent (hence the 30 days idea!). But more importantly than this age-based and version-based maturity is the simple fact that UI5 powers the Fiori revolution.
"Yes, yawn, we all know that", I hear you say. Maybe you do. I've been saying it often enough, and I'm still proud of the designers and developers at SAP who have made this happen and continue to make it happen. But perhaps what's even more important to realise is what that means for us, for me and for you, in the context of SAP Fiori and more specifically in the context of S/4HANA, the future normal.
Step back in time with me for a second, to R/2. If that's too far, let's just go back to R/3. If that's too far, let's look at your SAP systems today. What are they capable of? How malleable is the UI layer, how can you imagine modifying or extending it to suit your business processes? The answer may depend somewhat on the particular technology involved (classic dynpro, web dynpro, or even the Persona layer you're using), but the point is, you've become innately aware of how the UI layer can be stretched and improved - where it stretches naturally, and where it stresses and breaks.
With the future normal, that is changing for your business users. With a complete Fiori-based frontend, the rules of the game are different. Different in what can be achieved, different in how things can be achieved, and different in how things should be achieved.
What does that mean? Well, the mechanics are fundamentally different. Outside-in applications written in UI5, the toolkit supplying the power, the libraries, the design and the runtime for Fiori, are a different prospect, a different platform, and a different context for your developers and your design teams. But there's more. Not only do we have a change in technology, in particular in relation to extending existing SAP-supplied apps, but there's also an understandably strict set of SAP Fiori design guidelines, painstakingly put together, and followed by the application builders. Fiori's philosophy includes a different way of looking at applications and how they should exist and relate. And implicitly, how they should be extended and copied.
Understanding anything fully starts with the foundations. Understanding the future normal of SAP starts with Fiori and UI5, at least, looking at it through the eyes of your business process owners and users.
And it just so happens that we have an upcoming event tuned to exactly that, S/4HANA: Understanding the future normal :-) It's free, there's breakfast and coffee, and there are still some spaces left. So see you there!
Not too long ago, before Fiori was Fiori, SAP had tried several times to refresh the user experience. I'm aware of over 20 different UI technologies that we have used since the release of R/3. As mobility was sweeping into the enterprise, SAP adopted a native mobile development approach. At the time, many believed that this was an opportunity to create modern experiences with modern UI technologies (primarily iOS at the time) and development environments to refresh the SAP User Experience.
The first mobile apps showed promise, but as we started to roll out more and more, quality suffered. The experience of some of the native apps was good, some bad. We noticed a lot of creativity in building different ways to do the same things. This came to a head when some of our large customers evaluated SAP's mobile app portfolio as a whole and were not happy about this experience.
Design consistency was one thing. We also considered the full lifecycle of these apps. There are over a thousand permutations of Android software and hardware configurations in the market today. Even Apple now has several versions of screen sizes and resolutions to support across tablets, phones, and now watches. Cost of development, support, and ownership pointed to a modern, but scalable approach. We made a decision to go with a responsive HTML5 approach.
Luckily, SAP had been developing HTML5 controls at that time. As with other HTML5 libraries at the time, UI5 was separated between the desktop controls and the mobile controls. We took the decision to combine the best of what we had and create a responsive UI5 control set for Fiori.
I may have understated the part about our customers being unhappy about the user experience. It was escalated to the highest levels and we were under tremendous pressure to demonstrate to customers that this new concept would fly - quickly. We had 6 days, 144 hours to be exact, to demonstrate to internal stakeholders both the desirability and feasibility of our approach. I'll never forget Stefan Beck and the UI5 team walking down the halls of Walldorf to our war room saying that, "the UI5 team will support you."
That was the beginning. Since then, UI5 and the team behind the technology have expanded much beyond a mere set of controls. The team has helped to develop a programming model that is open and designed to scale for the enterprise. It is part of a growing set of tools to make UI development both efficient and scalable, both for SAP and the industry.
Looking forward, we'll start to augment our responsive design approach to also leverage native, on-device capabilities. Analytics will become more of an area of focus. I have said many times that I feel my role as Chief Design Officer is to change the perception of SAP's user experience. Fiori has done much for SAP to start that perception change, but I am acutely aware that we are only just beginning on our journey. I also feel that SAP's journey is the same journey that the entire IT industry will need to follow to bring great experiences to our users.
It's been more than a couple of years since I first had a look at XML data in the context of UI5. In my "Re-presenting my site with SAPUI5" video I used an XML Model to load the XML feed of my weblog into a UI5 app (gosh, JavaScript views!).
The XML Model mechanism proved very useful this week on a project, and I thought I'd re-examine some of its features. Everyone knows about the JSON and OData Model mechanisms; at least in my UI5 conversations, I don't hear folks talk about the XML Model much. So I thought I'd give it some love here.
The API reference documentation for the XML Model is a little dry. As Frank Zappa once said, "The computer can't tell you the emotional story. It can give you the exact mathematical design, but what's missing is the eyebrows". We need to look elsewhere for the emotional story, for the eyebrows; and I think a nice place might be the QUnit tests for the XML Model.
Learning from the QUnit sources
Let's have a look at the source, and see what we can learn. There are actually a couple of QUnit test files; we'll have a look at just one of them - XMLModel.qunit.html. We'll just examine the setup and a couple of tests to see what we can find - what we can expect to be able to do with an XML Model. You can explore the rest of the QUnit test files on your own.
At the start of XMLModel.qunit.html, a couple of XML Models are instantiated with some test data as follows:
var testdata = "<teamMembers>" +
"<member firstName=\"Andreas\" lastName=\"Klark\"></member>" +
"<member firstName=\"Peter\" lastName=\"Miller\"></member>" +
"<member firstName=\"Gina\" lastName=\"Rush\"></member>" +
"<member firstName=\"Steave\" lastName=\"Ander\"></member>" +
"<member firstName=\"Michael\" lastName=\"Spring\"></member>" +
"<member firstName=\"Marc\" lastName=\"Green\"></member>" +
"<member firstName=\"Frank\" lastName=\"Wallace\"></member>" +
"</teamMembers>";
var testdataChild = "<pets>" +
"<pet type=\"ape\" age=\"1\"></pet>" +
"<pet type=\"bird\" age=\"2\"></pet>" +
"<pet type=\"cat\" age=\"3\"></pet>" +
"<pet type=\"fish\" age=\"4\"></pet>" +
"<pet type=\"dog\" age=\"5\"></pet>" +
"</pets>";
setXML and setData
The XML data is added to the XML Models with the setXML function:
var oModel = new sap.ui.model.xml.XMLModel();
oModel.setXML(testdata);
sap.ui.getCore().setModel(oModel);
var oModelChild = new sap.ui.model.xml.XMLModel();
oModelChild.setXML(testdataChild);
This is different to the setData function, which is also present on the JSON Model, with an equivalent semantic. Here in the XML Model, the setData function would be expecting an XML encoded data object, not a literal string containing XML.
As an example, if we have a variable containing some XML string like this:
xmlstring = "<root><name>DJ</name></root>"
then we could either set it on an XML Model with setXML, like this:
m = new sap.ui.model.xml.XMLModel()
=> sap.ui.d…e.C.e…d.constructor {mEventRegistry: Object, mMessages: Object, id: "id-1438428838337-6", oData: Object, bDestroyed: false…}
m.setXML(xmlstring)
=> undefined
m.getProperty("/name")
=> "DJ"
or with setData, creating an XML encoded data object, like this:
m = new sap.ui.model.xml.XMLModel()
=> sap.ui.d…e.C.e…d.constructor {mEventRegistry: Object, mMessages: Object, id: "id-1438428927599-7", oData: Object, bDestroyed: false…}
m.setData(new DOMParser().parseFromString(xmlstring, "text/xml"))
=> undefined
m.getProperty("/name")
=> "DJ"
A couple of tests
Then we're off on the tests. There are a couple of tests to check getProperty, the first using a relative context binding:
test("test model getProperty with context", function(){
var oContext = oModel.createBindingContext("/member/6");
var value = oModel.getProperty("@lastName", oContext); // relative path when using context
equal(value, "Wallace", "model value");
});
test("test model getProperty", function(){
var value = oModel.getProperty("/member/6/@lastName");
equal(value, "Wallace", "model value");
});
What we can see here already is that we can access XML attribute values ("lastName" in this case) with the XPath @ accessor. As an aside, the use of the optional second oContext parameter in the getProperty call is something one doesn't see very much, but is extremely useful.
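To get a feel for those binding path semantics (numeric indices for repeated elements, a leading @ for attributes) outside the browser, here's a toy resolver over a plain JavaScript representation of the test data. This is a sketch of the idea only, not the XML Model's actual implementation, and the object shape is invented for illustration:

```javascript
// Toy stand-in for XMLModel path resolution: numeric segments index
// repeated child elements, and a leading "@" reads an attribute.
// This mimics paths like "/member/6/@lastName" from the QUnit tests.
function getProperty(doc, path) {
  var parts = path.split("/").filter(Boolean);
  var node = doc;
  for (var i = 0; i < parts.length; i++) {
    var part = parts[i];
    if (part.charAt(0) === "@") {
      return node.attributes[part.slice(1)]; // attribute access
    }
    node = /^\d+$/.test(part)
      ? node[Number(part)]       // index into repeated elements
      : node.children[part];     // named child element(s)
    if (node === undefined) return undefined;
  }
  return node;
}

// Plain-object stand-in for the <teamMembers> test document
var teamMembers = {
  children: {
    member: [
      { attributes: { firstName: "Andreas", lastName: "Klark" } },
      { attributes: { firstName: "Peter", lastName: "Miller" } },
      { attributes: { firstName: "Gina", lastName: "Rush" } },
      { attributes: { firstName: "Steave", lastName: "Ander" } },
      { attributes: { firstName: "Michael", lastName: "Spring" } },
      { attributes: { firstName: "Marc", lastName: "Green" } },
      { attributes: { firstName: "Frank", lastName: "Wallace" } }
    ]
  }
};

console.log(getProperty(teamMembers, "/member/6/@lastName")); // "Wallace"
```

The real model resolves these paths against an actual XML DOM, of course, but the shape of the path language is the same.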
Element content retrieval
The rest of the file contains a load of other tests, all useful reading material, from the rare-to-see use of the unbindProperty function to aggregation bindings that are comfortable to use.
One thing that we have to wait until test 15 to see is the use of element content:
test("test XMLModel XML constructor", function(){
var testModel = new sap.ui.model.xml.XMLModel(
);
testModel.setXML("<root>" +
"<foo>The quick brown fox jumps over the lazy dog.</foo>" +
"<bar>ABCDEFG</bar>" +
"<baz>52</baz>" +
"</root>");
equal(testModel.getProperty("/foo"), "The quick brown fox jumps over the lazy dog.");
equal(testModel.getProperty("/bar"), "ABCDEFG");
equal(testModel.getProperty("/baz"), 52);
});
Until now we've only seen examples of XML where the data is stored in attributes. What about the more classic case of text nodes, like this example XML here?
Well, as we can see, a simple call to getProperty will do what we want. If we're XPath inclined, we could even add the text() specification like this:
testModel.getProperty("/bar/text()")
=> "ABCDEFG"
and still get what we expect.
Ending where we started
And of course, to round things off, we can always get back to an XML encoded data object with getObject, like this:
testModel.getObject("/bar")
=> <bar>ABCDEFG</bar>
(that result is indeed an object), in a similar way to how we retrieve the whole object from the model:
testModel.getData()
=> #document
<root>
<foo>The quick brown fox jumps over the lazy dog.</foo>
<bar>ABCDEFG</bar>
<baz>52</baz>
</root>
The XML Model is a powerful ally, and the QUnit tests are a rich source of information about it. Spend a coffee break looking through the sources; you won't be disappointed!
Yesterday Peter Müßig from the UI5 team at SAP in Walldorf announced the multi-version capability for SAPUI5.
He also documented the details in a post on the SAP Community Network here: "Multi-Version availability of SAPUI5". Shortly after, it was announced that this would also be available for OpenUI5.
This is great news, and something that we've been waiting for now for a while. It makes perfect sense, and the ability to select a particular runtime version via a part of the bootstrap URL's path info is very nice. It's something I do locally on my workstation anyway, and I also have a "latest" symbolic link that I ensure points to the latest copy of the runtime or SDK that I have locally.
Along with the announcement came a link to a simple SAPUI5 Version Overview page, built in UI5. It looks like this:
And if you look under the covers, you'll see a single-file app, with a lot of custom CSS, some JavaScript view stuff going on, and the retrieval of a couple of JSON resources: the version overview info, and the data from the neo-app.json file that is present in the HCP platform and which describes routes to destinations, which include the SAPUI5 runtime services, now available at different paths for different versions.
You'll also see some complex manipulation and merging of those two datasets, and the mix of UI5 controls with raw HTML header elements.
The result is an app that looks OK on the desktop but doesn't look so good on a smartphone, as you can see above.
So I spent some time on the train down from Manchester to London early this morning to see what I could do.
I wanted to address a couple of things:
The UI part was straightforward. I used my MVC technique (see MVC - Model View Controller, Minimum Viable Code from earlier in this series) to define a new View, declaratively in XML. I used an App control with a couple of Pages, and a simple controller for the view which handled all the view lifecycle and user-generated events, as well as being the container for the formatter functions.
I also used some of my favourite JavaScript functions to bind together the disparate data into a nice cohesive single array of maps. I left the original data manipulation as it was, and then grabbed what it produced to make my array. I could then bind the List in the UI to this single array, and then confer the right binding context to the second Page for a selected item from the array.
I've created a small GitHub repo ui5versioninfo with the files. It contains a local snapshot of the two source JSON files (neo-app.json and versionoverview.json), the original versionoverview.html that produces the UI we saw earlier, and a new file, called new.html, which is my quick attempt at addressing those things above.
Here's what the result looks like:
I've tried to use some UI5 design and control best practices, while defining the UI in XML. I've added some functional-programming-style data merging to take place after the original manipulation, and a small controller with the requisite functions for event handling and formatting.
I took the screencast of the "finished" attempt on the tube from London Euston to Chiswick this morning, so it really was a rush job. But I think that it's worth thinking about how we can improve this useful resource. How would you improve it?
I'm mid-flight in my first UI5/Gateway project, working with a great team of developers who have all contributed to this 30 Days of UI5 series. As a non-techie Project Manager embarking on mobile development for the first time, I thought I'd share some of my experiences and tips.
This isn't regular SAP configuration - this is mobile development.
My experience of SAP to date has been in software modules - EPM, BW, CRM, for example. It's easy to think of a UI5/Gateway project in the same way because they're SAP products, but putting the name to one side, the difference between enterprise software and mobile development is huge.
This is obvious to those familiar with mobile development, but not so obvious to those new to this area. Prior to embarking on any UI5 project, get hold of case studies, project plans, artefacts, lessons learned and people that have delivered UI5 applications to get an understanding of how to set your project up for success. If your experience is largely in enterprise software projects, this is going to be very different :-).
Nail your branch & review strategy early on
At the beginning of the project, work with your team to develop a branch & review strategy. Agree a process for matching short-lived feature branches to tasks, for reviewing code prior to any merging, and also ensure that development branches are tidy and up to date - that is, delete any old branches, or branches that are no longer needed.
A friend told me a story of a time when he was working in a fast moving and experienced frontend/toolkit development team. He'd had a scattering of branches lying around his local repo, and a Ukrainian colleague, in a thick accent, speaking German, reprimanded him gently but firmly: "What are all these branches doing clogging up your workspace and your brain? Get rid of them!".
Allow enough time for planning - agile doesn't excuse poor process
I found it's easy to run into development with a bunch of user stories and little else. Although UI5 lends itself to agile development, there still needs to be adequate time allocated to planning sprints, and also fundamental architecture design, not only for the UI itself, but for the data design and integration. Factor this into your plans from the very beginning and don't budge - if anything should give as a result of time, cost or scope constraints, it mustn't be the preparation that goes into making sprints a success.
Embrace it
Working with UI5, I've discovered methods of project delivery that are entirely different from the standard Waterfall/ASAP approach so often adopted in enterprise software projects. I've also found it hugely rewarding to see an intuitive, easy-to-use application come to life and, more importantly, so has our customer (case study in progress - more on that soon).
So in summary, my advice is:
I'd be really interested to hear about other experiences of managing or coordinating UI5/Gateway projects, and to hear any advice you may have.
It was in the spring of 2012 when I wrote this piece about the new kid on the block, SAPUI5:
SAPUI5 - The Future direction of SAP UI Development?
The fledgling toolkit had been released at version 1.2 earlier that year, and while it had clearly been in gestation for a while inside SAP, it was still new and raw enough to make folks wonder what it was all about. More than the newness or the rawness was how it was different, how it changed the rules. And what made it even more interesting was the fact that while SAP had changed a lot of rules since the 80s, this time, it was SAP embracing common practices and growing standards outside its own development ecosphere. And that was a good thing.
So SAPUI5 was not just a toolkit, it was more than that. It was arguably the poster child for how SAP was changing, changing to embrace, adopt and build upon open standards and protocols.
Of course, that had been happening for a while, most notably, at least in my opinion, by the introduction of the Internet Communication Manager (and corresponding user-space Internet Communication Framework) to the R/3 architecture, allowing SAP systems to speak HTTP natively. And there was OData, which SAP adopted as a REST-informed protocol and format for the next generation of business integration. It had been a long time coming; the journey from service-orientation to resource-orientation, starting from the mid 2000s - with posts like this: "Forget SOAP - Build Real Web Services with the ICF" :-) - was long and arduous.
So it was met by some with trepidation, wonder, even cynicism. But the rise and rise of UI5's success has been undeniable. Success not only in becoming the engine powering the first SAP Fiori UX revolution, but also in the move towards a more open and outward-facing development approach.
The UI5 teams of designers and developers themselves, in Walldorf and around the world, have open software and standards in their DNA. UI5 itself has been open sourced. The development standards and processes are, out of necessity, different. And we can see that first hand. Just look at the home of UI5 on the web, at https://github.com/SAP/openui5. GitHub!
The development process is there for us to see, warts and all. The smallest changes are being made, in public. Look at the one authored 11 hours ago in this screenshot. It's a simple improvement to variable declaration in the code for the Message Popover control in the sap.m library. It doesn't matter what it is; what matters is that it's open. For us all to see, scrutinise, and most importantly, learn from.
UI5 powers SAP Fiori, the services of their cloud offerings (for example in the form of the Web IDE, written in UI5) and of course the S/4HANA business suite. It's destined to become a part of the future normal. It's a toolkit with a strong pedigree, a toolkit that is not perfect (I can't think of any software that is) but a toolkit with passionate folks behind it. It's gaining some adoption outside of the SAP ecosphere too, and in some cases is almost becoming part of the furniture - not the focus of energy, but the enabler of solutions. It Just Works(tm) and gets out of the way. That for me is a sign of growing maturity.
A few months ago a preview release of 1.28 was made available. In the blog post that accompanied it, a number of the new features were introduced. Without much fanfare, certainly without any cool-looking screenshots, the experimental "Client" operation mode was announced for the OData Model mechanism.
OData Model - Server-side
The OData Model is special, in that it is classified as a server-side model, unlike its client-side siblings such as the JSON Model or the XML Model (or the Resource Model, for that matter). This means that the data "home" is seen as the server, rather than the client (the browser). Consequently, any operations on that data, even read-only operations such as sorting and filtering, take place on the server. That means extra network calls. There are truly marvellous advantages also, which the margin [of this post] is too narrow to contain.
There are circumstances, even when dealing with entity sets in OData services, where sorting and filtering could and should take place on the client, rather than on the server. To this end, 1.28 brought an initial experimental feature to the OData Model mechanism - the OData Operation Mode.
Operation Mode
The Operation Mode joins a small but important set of modes relating to the OData Model mechanism. By default, the Operation Mode is "Server". But it can be set to "Client", which causes all data to be loaded from the server once, and subsequent sorting and filtering operations to be performed on the client, without further network calls. As the blog post mentions, this only really makes sense as long as there isn't a ton of data.
Note that the Operation Mode is related to the OData Model mechanism instantiation in that it is the default for that model instance. You actually specify the mode for a binding, as shown in the code snippet in the blog post:
oTable.bindRows({
path: "/Product_Sales_for_1997",
parameters: {
operationMode: sap.ui.model.odata.OperationMode.Client
}
});
Experiment!
This experimental feature was crying out for ... well, experimentation. So I threw together an MVC (model view controller, minimum viable code) based app to test it out. Here's the result:
Here we have a test app with a List, where the items aggregation is bound to the Categories entity set in the public Northwind OData service at http://services.odata.org/V2/Northwind/Northwind.svc/.
Note that the Operation Mode is only available in the v2 version of the OData Model mechanism, so that's what I'm using here.
Initially, the binding to the List's items aggregation is with the (default) value of "Server" for the Operation Mode. You can see the network calls that are made to ask the OData service to return the entities in a specific order (with the $orderby OData parameter) each time I hit the sort button, which toggles between ascending and descending sorting of the category names.
But then, in the console, I grab the List and re-bind the items aggregation, to the same path ("/Categories") but in "Client" Operation Mode. The result is that a new call is made to fetch the entities to satisfy that (new) binding, but further sorts are done entirely on the client - there are no more network calls made.
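Conceptually, "Client" mode boils down to fetching the entity set once and then sorting (or filtering) the cached results locally, with no further requests. Here's a plain-JavaScript sketch of that idea, with a made-up fetch function standing in for the OData call; this is not the OData Model's internal code:

```javascript
// Pretend server: counts how many requests the "client" makes.
var requestCount = 0;
function fetchEntities() {
  requestCount++;
  return [{ name: "Beverages" }, { name: "Produce" }, { name: "Seafood" }];
}

// "Client" operation mode in miniature: load once, then sort locally.
var cache = fetchEntities();
function sortByName(descending) {
  return cache.slice().sort(function (a, b) {
    var result = a.name.localeCompare(b.name);
    return descending ? -result : result;
  });
}

console.log(sortByName(false).map(function (c) { return c.name; }));
console.log(sortByName(true).map(function (c) { return c.name; }));
console.log(requestCount); // still 1 - the re-sorts cost no network calls
```

In "Server" mode, by contrast, each re-sort would be another fetchEntities call with a different $orderby.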
I'd call that experiment a success, and I'm looking forward to developments in this area. Nice work, UI5 team!
If you've followed this series you'll have come across the OpenUI5 Walkthrough, "a great multi-step walkthrough of many of the features and practices of UI5 development".
In Step 5 of the walkthrough, on "Controllers", we're introduced to something that looks unfamiliar, especially to those who have written large numbers of controllers thus far. The way the XML View's Controller is defined is ... different. Step 5 doesn't say much specifically about how this works, but Step 6, on "Modules", does.
This is what the Controller source code looks like:
So what's happening here?
Well, what's happening is that we're seeing the beginning of a migration to an Asynchronous Module Definition (AMD) style mechanism. And the principal vehicle for this is a new function sap.ui.define, which was introduced to the world in 1.28 (1.27 internally).
There's already some API documentation for this experimental new way to define modules that you can read in the API reference guide for sap.ui.define itself. There you'll see how there's a transition planned away from synchronous, and towards asynchronous, loading. You'll see for example that the optional fourth parameter "bExport" of sap.ui.define is there to support that transition.
While there's plenty to read there, let's just take a quick look at what it means for those like us at the UI5 coalface. We'll take the code in the screenshot above as an example:
Instead of calling something like this ...
sap.ui.core.mvc.Controller.extend("your.name.here", {
// your controller logic here
});
... we can use the new, more generic sap.ui.define to first of all declare dependencies and then define the factory function that becomes the controller, in this case. Let's take a look at the code and examine it line by line:
1-14: The call to sap.ui.define extends across all the lines here; and we can see that out of the four total possible parameters described in the API reference, only two are used: the optional list of dependencies (represented here by the array) and the factory function that has a single statement returning an extended controller.
2-3: These are the dependencies. We're defining a Controller, so we'll want to extend UI5's core controller (in the same way that we often do, such as in the example earlier). For that, we have a dependency on sap.ui.core.mvc.Controller. We're also using the Message Toast's "show" function, so we declare a dependency on sap.m.MessageToast. Note that the dependencies are expressed as resource paths (with the .js suffix omitted, of course).
4: The second parameter passed in the call to sap.ui.define is the factory, and we can see the function definition start here. Note that each dependency reference is given to this factory function, in the same order that they're declared in the dependency list. By convention, the most significant part of the resource path name is used for the parameter name (for example "Controller" for sap.ui.core.mvc.Controller).
5: The "use strict" directive is not specifically a feature of the new module definition syntax, but it is significant in that there is growing focus on JavaScript syntax correctness and linting. For more on this, see another post in this series: "UI5 and Coding Standards".
7-12: The rest of the source code looks fairly familiar. There's one exception though, and it's a result of the dependency mechanism described earlier. The function has "Controller" and "MessageToast" available to it, and so we can and should use these to refer to the sap.ui.core.mvc.Controller and sap.m.MessageToast resources throughout. This is nice, and makes for slightly neater code too.
It's early days for the new define mechanism, and there's clearly a journey ahead for those in the core UI5 team looking after fundamental module and dependency loading and management mechanisms. But even at this early stage, it's worth paying attention to the direction UI5 is going in this regard, and starting to experiment. I know I will be!
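To see the AMD mechanics in miniature, here's a toy, synchronous define/resolve registry in plain JavaScript. It's a sketch of the idea only, nothing like UI5's actual loader (no caching, no async loading, no cycle handling), and all the module names are invented:

```javascript
// A tiny AMD-flavoured module registry: define() stores a factory plus its
// dependency names; resolve() calls the factory with resolved dependencies.
var registry = {};

function define(name, deps, factory) {
  registry[name] = { deps: deps, factory: factory };
}

function resolve(name) {
  var mod = registry[name];
  var args = mod.deps.map(resolve); // resolve dependencies first
  return mod.factory.apply(null, args);
}

// Two "library" modules ...
define("toy/MessageToast", [], function () {
  return { show: function (msg) { return "TOAST: " + msg; } };
});
define("toy/Controller", [], function () {
  return { extend: function (name, impl) { impl.name = name; return impl; } };
});

// ... and an "app" module that depends on both, just as the walkthrough's
// controller depends on sap.ui.core.mvc.Controller and sap.m.MessageToast.
define("toy/App.controller", ["toy/Controller", "toy/MessageToast"],
  function (Controller, MessageToast) {
    return Controller.extend("toy.App", {
      onShowHello: function () { return MessageToast.show("Hello World"); }
    });
  });

var controller = resolve("toy/App.controller");
console.log(controller.onShowHello()); // "TOAST: Hello World"
```

Note how the factory's parameter names (Controller, MessageToast) line up positionally with the dependency array, which is exactly the convention described above.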
UI5's support for multiple languages, out of the box (see the post "Multi-language support out of the box - UI5's pedigree" in this series), is impressive and easy to use. Creating a message resource bundle in your Component.js file is straightforward, especially if picking up the user's language preferences in the browser.
What can be less straightforward, though, is organising these files into something manageable. For plenty of projects your i18n file might be on the small side, but it's pretty easy to build up a large file. An application I'm currently working on, which perhaps has only 50% of its views defined, already has 100 definitions in the i18n file. (A quick look at the Fiori My Travel Expenses app v2 shows there are around 1000 lines, and about 500 definitions, in the resource file; and whilst it's reasonably well documented with comments, you may well be hunting for the usage of a text.)
#XBUT,20: Button that distributes (shares) the total amount evenly between all attendees
DISTRIBUTE_EVENLY=Distribute Amounts Evenly
#XBUT,20: add internal attendee button
ADD_INTERNAL_ATTENDEE=Add Internal Attendee
#XBUT,20: add external attendee button
ADD_EXTERNAL_ATTENDEE=Add External Attendee
#XFLD,20: FirstName - LastName in the right order, e.g. EN: Smith, John
ATTENDEE_FULLNAME_ARTIFACT={1}, {0}
#XTIT: title of Add Internal Attendees select dialog
INTERNAL_ATTENDEES_TIT=Add Internal Attendees
#XTIT: title of Add External Attendees dialog
EXTERNAL_ATTENDEES_TIT=Add External Attendees
Example of a Fiori Resource Model file from "My Travel Expenses"
Before we dive into the structure of the key-value pairs of the file, though, it's worth thinking about whether one file for all your texts makes sense. In the majority of cases you really wouldn't want to add further complexity by adding more files. In my experience, though, there are some cases where creating additional resource files may be useful.
As we move on to the structure of these files, it might not seem important (you can always search for a term in your chosen IDE, after all), but like all good coding practices, structure can be immensely helpful in the following regards:
How you choose to organise the language file is a matter of preference; however, in my experience there are two key things I like to highlight and organise in the language file:
I'll define all the common terms at the beginning of my language file. My preference for all my keys is to use a dot notation to specify them (as it lines up nicely with the identification of components). So here's an example:
#Common Terms
common.thing=Foo
common.items=Items
common.add=Add
...
Thing is though, common terms feel like something I should have in my application; you want to make sure that when you call a thing Foo, it's always a Foo, and when it's requested to change to Bar, I can change the common term and my job is done. In practice though, this never really works. Why? Well, I might be able to define those common terms, but in the majority of cases I always need to fit them into a longer text, such as Create a Foo or Delete Foos.
OK, so maybe I can define some common texts, and do some clever pre-processing with Grunt to expand placeholders in my text, or do the same when I load the resource file:
#Specific Terms (pre-process)
master.things.addThing=Add {common.thing}
master.things.deleteThings=Delete {common.thing}s
#Specific Terms (post-process)
master.things.addThing=Add Foo
master.things.deleteThings=Delete Foos
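As a sketch, such a pre-processing pass over a parsed .properties map might look like this (the function name is invented for illustration, not from any real build setup):

```javascript
// Hypothetical sketch: expand {common.*} placeholders in a parsed
// .properties map, as a build-time (e.g. Grunt) step might do.
function expandCommonTerms(mTexts) {
    var mResult = {};
    Object.keys(mTexts).forEach(function (sKey) {
        // Replace each {common.xyz} reference with the referenced text,
        // leaving the placeholder untouched if no such key exists
        mResult[sKey] = mTexts[sKey].replace(/\{(common\.[\w.]+)\}/g,
            function (sMatch, sRef) {
                return mTexts[sRef] !== undefined ? mTexts[sRef] : sMatch;
            });
    });
    return mResult;
}
```

Note that this naive expansion is exactly what runs into the pluralisation trouble described next.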
Nice? Well, not really: it's not a great practice to make longer texts out of shorter texts. Consider the need to correctly handle plurals or other modifications you might require. Let's say we can have Foos, but the plural of Bar is not Bars but Baren; then my nice easy change above isn't going to work. Other languages might not have the same syntactic structures, and I could finish up chasing my tail trying to get it right across all languages, or finishing up like those pre-recorded train announcements made up of single recorded chunks - they work, but just sound awful.
There is one valid place for common terms, and you might therefore still want to define them in your main i18n file (or even a separate one you don't load): as a glossary to help those maintaining the file. Adding common.thing=Foo to the head of the file, even if it's never used, will help those coming along after to understand how things are referred to. It's a good UX practice, and fundamental to building a consistent experience.
Most of my definitions, though, will be very specific to a view or fragment, and therefore I like to identify these, in this manner, with the application as an implied root. If I'm developing a Split App which has, for example, the following views
then I'll structure my language file very specifically to reference the view, the control(s) in the view and, where appropriate, the property. This might result in something like:
#Master views
#Tasks
master.tasks.title=Maintenance Tasks
#Services
master.services.title=Active Services
master.services.toolbar.button.add=Add Service
master.services.toolbar.button.delete=Delete Services
#Detail Views
#Rounds
detail.rounds.title=Round Definition
detail.rounds.tabBar.tab.details=Details
detail.rounds.tabBar.tab.vehicles=Vehicles
#Rounds / Vehicles fragment
detail.rounds.fragment.vehicles.column.title.service=Service
detail.rounds.fragment.vehicles.column.title.capacity=Capacity
...
Admittedly this is quite a verbose approach, and it requires a little discipline to use, but the advantages are plain to see: I immediately get a sense of where a text might appear in the user interface, and I can also get a sense of whether anything is missing (for example, I'd expect every view to have a {view}.title attribute).
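That "is anything missing?" check can even be automated. Here's a hypothetical sketch (the function is mine, not from any real tooling) that scans parsed i18n keys for views lacking a title entry, assuming the {area}.{view}.{...} key structure shown above:

```javascript
// Hypothetical sketch: given the parsed i18n keys, list any
// "<area>.<view>" grouping that lacks a "<area>.<view>.title" entry.
function findViewsMissingTitle(aKeys) {
    var mHasTitle = {};
    aKeys.forEach(function (sKey) {
        var aParts = sKey.split(".");
        // Ignore short keys such as the common.* glossary entries
        if (aParts.length < 3) { return; }
        var sView = aParts[0] + "." + aParts[1];
        mHasTitle[sView] = mHasTitle[sView] || aParts[2] === "title";
    });
    return Object.keys(mHasTitle).filter(function (sView) {
        return !mHasTitle[sView];
    });
}
```

Run over the example file above, this would flag nothing; remove master.services.title and "master.services" would be reported.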
By taking a structured approach to the language file, it also becomes easier to set up controls with bindings to the language file, as there is no need to try and think of a name. It goes without saying that you should be building your XML views with bindings for texts from the very start of development - no one wants to go back and add them all in at a later date (if you do, that's precious velocity you're wasting).
Who thought such a straightforward flat structured concept could require so many considerations?
]]>Whilst recently developing a custom UI5 app with an SAP PI backend, I came across some useful mechanisms. My aim was to merge two sets of data from two service calls into an Object List Item. Having already bound one set of data to my XML View, my initial thought was to perhaps use a formatter, pass in two arrays of objects and manipulate the data within the Formatter.js file. As you probably guessed, this simply didn't work; I should mention that both service calls return data in JSON format rather than standard OData. My next approach was to manipulate the two arrays in the View's controller and merge them both into a new sorted array, assigning it to the Component's model. One of the benefits of doing this is that you can define your own attribute names and data, which is then globally accessible within the app.
Using the code below you can specify a path to set your new data:
this.oModel.setProperty("/newPath", mergedArray);
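Here's a minimal sketch of that merge-and-sort step. The attribute names and the join on a shared id are invented for illustration; only the setProperty call reflects the snippet above:

```javascript
// Hypothetical sketch: join two JSON result sets on a shared "id"
// attribute, pick our own attribute names, and sort for display.
function mergeResults(aPrimary, aSecondary) {
    var mById = {};
    aSecondary.forEach(function (oEntry) { mById[oEntry.id] = oEntry; });
    return aPrimary
        .map(function (oEntry) {
            var oExtra = mById[oEntry.id] || {};
            return { id: oEntry.id, title: oEntry.title, status: oExtra.status };
        })
        .sort(function (a, b) { return a.title.localeCompare(b.title); });
}

// In the controller, the merged array would then be set on the model:
// this.oModel.setProperty("/newPath", mergeResults(aFirstCall, aSecondCall));
```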
One other related issue was searching for specific object attributes within an array of objects. The context of this search was to allow a user to select an item from an Object List Item and load additional data in a new View. Having already passed the relevant parameters within my Router, it was jQuery to the rescue. The jQuery.grep function allows you to perform a wildcard search on an array of attributes without the need to manually loop through each element. You pass in the array and a test function; the test is applied to each entry, and all the entries that satisfy the function are returned as a new array.
var aResult = $.grep(dataArray, function (e) {
    return e.attributeName.indexOf("searchAttribute") === 0;
});
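jQuery.grep behaves much like the native Array.prototype.filter. With some sample data (the attribute names and values here are invented), the same test can be illustrated like this:

```javascript
// Sample data, invented for illustration
var dataArray = [
    { attributeName: "searchAttribute-001" },
    { attributeName: "other-002" },
    { attributeName: "searchAttribute-003" }
];

// jQuery.grep(aArray, fnTest) keeps the entries for which fnTest
// returns a truthy value - equivalent to Array.prototype.filter:
var aResult = dataArray.filter(function (e) {
    return e.attributeName.indexOf("searchAttribute") === 0;
});
```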
One last interesting mechanism used was the storing of hidden Custom Data objects within an XML View. Using the following:
xmlns:app="http://schemas.sap.com/sapui5/extension/sap.ui.core.CustomData/1"
<ObjectListItem
title="List"
app:key="{hiddenKey}" />
you can access the Custom Data object using the data() method within the Controller of your View.
For additional information, check out the following links: jQuery.grep and CustomData objects.
]]>Photo by Janina Blaesius
If you're reading this post, or this whole series, it's very likely that you already know something about UI5. Whether that's coming from the SAP enterprise angle with the SAPUI5 flavour, or from the Open Source angle with the OpenUI5 flavour. But there are plenty of other souls out there that are still missing the UI5 salvation :-). And so I thought I'd briefly review the sorts of activities that have been happening over the last couple of years as far as evangelism, education, and advocacy are concerned.
This is very timely, as this year's OSCON has just finished in Portland, and a couple of UI5 team members, Janina Blaesius and Michael Graf, were there with a session on OpenUI5: "No more web app headaches". Good work folks! OSCON is O'Reilly's Open Source Convention, a venerable conference that I've been lucky enough to attend and speak at on and off since 2001. Last year, I co-presented a tutorial session on OpenUI5 at OSCON with Andreas Kunz and Frederic Berg - two more heroes from the same UI5 team as Janina and Michael.
Not only that, but the great news is that at the EU version of OSCON, taking place in Amsterdam in October this year, there's another session "Don't Disconnect Me! The challenges of building offline-enabled web apps" by another mighty UI5 team combo of Christiane Kurz and Matthias Oßwald. Awesome!
And even if you omit the usual suspect conferences such as SAP TechEd, there's plenty more, far too much to list in this single post. But here's a quick selection:
FOSDEM: OpenUI5 at FOSDEM 2015
Mastering SAP: Speaking at Mastering SAP Technologies
SAP Arch & Dev: Speaking at the SAP Architect & Developer Summit
Fluent: OpenUI5 at Fluent Conference 2015
JSNext: OpenUI5 at JSNext Bulgaria
DevoxxUK: DevoxxUK - One does like to code!
SAP Inside Track: SAP Inside Track Manchester, SAP Inside Track Sheffield - UK
SAP CodeJam: SAP CodeJam Liverpool - OpenUI5
Bacon: OpenUI5 at BACON Conference
In September there's another SAP Inside Track in Sheffield, where there will be talks on UI5 of course (well, I'm going! ;-) and a whole second day dedicated to learning and hacking with UI5.
I'm sure I've missed out some UI5 activities, so please let me know of others that have happened. And perhaps more importantly, let me know of any that are coming up, especially any that you're planning, and I can add them here. Share & enjoy!
]]>In an earlier post in this series, MVC - Model View Controller, Minimum Viable Code, I showed how you could write a single-file UI5 app but still embrace and use the concepts of Model View Controller, having separate controller definitions and declarative XML Views. I also mentioned you could use XML Fragments in this way too, and Robin van het Hof asked if I could explain how. So here we go, thanks Robin!
If we take the code from the previous post and run it, we end up with a UI that looks like this:
Let's add some behaviour to the Button so that it instantiates and opens a Dialog control. We'll define this Dialog control in an XML Fragment.
In the same way that we defined the XML View, we'll define the XML Fragment inside a script element, this time with a "ui5/xmlfragment" type, like this:
It's a standard XML Fragment definition, and even though it only contains a single root control - the Dialog - I'm using the Fragment Definition wrapper explicitly anyway (as I think it's good practice).
When we press the Button, we want this Dialog to appear, like this:
So let's rewrite the handler "onPress" which is attached to the Button's press event, so it now looks like this:
This is a common pattern for fragments, so let's examine the code line by line:
48: We're going to be storing a reference to the Dialog fragment's instance in a controller variable "_oDialogFragment", so we declare it explicitly, mostly to give those reading our code a clue as to our intentions.
51-56: Ensuring we only instantiate the Dialog once, we use the sap.ui.xmlfragment call, with the fragmentContent property, passing the content of the fragment script with jQuery's help (remember, the name of the fragment script is "dialog"). Once instantiated, we add it as a dependent to the current XML View.
57: At this stage we know we have a Dialog ready, so we just open it up.
60-62: The onClose function handles the press event of the "Close" Button in the Dialog's buttons aggregation.
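The "instantiate once" guard at the heart of this pattern boils down to a simple memoisation idiom. Here's a plain JavaScript sketch of just that idiom, readable in isolation: the factory function passed in stands in for the real sap.ui.xmlfragment(...) call, and the names are mine, not from the original code:

```javascript
// Sketch of the lazy-instantiation guard: create the fragment on first
// use only, then reuse the stored instance on every later press.
function makeDialogOpener(fnCreateFragment) {
    var oDialogFragment = null; // plays the role of this._oDialogFragment
    return function onPress() {
        if (!oDialogFragment) {
            // First press: instantiate (in the real app, via
            // sap.ui.xmlfragment with the fragmentContent property)
            oDialogFragment = fnCreateFragment();
        }
        oDialogFragment.open();
        return oDialogFragment;
    };
}
```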
And that's pretty much it. Use script elements to embed XML Views and Fragments, and use sap.ui.xmlview and sap.ui.xmlfragment to instantiate them, with jQuery to grab the actual content.
]]>In an earlier post in this series, entitled "The UI5 Support Tool - Help Yourself!", we looked at the Support Tool, examining the information available in the Control Tree. In particular we looked at the Properties and Binding Infos tabs. While exploring the new UI5 1.30 features with the Explored app, I re-noticed a small addition to the Explored UI: a Button that allowed me to switch to full screen mode to view control samples.
I thought it would be fun to use the Support Tool and other debugging techniques to see exactly what was happening in the Explored app when we toggled that control.
Identifying the Button
First, we need to identify the Button control by its ID. We can use a context menu feature of Chrome which will open up the Developer Tools: right-click on the Button and select Inspect Element. This will show us the ID in the highlighted sections in the screenshot:
Here, the full ID highlighted is "__xmlview2--toggleFullScreenBtn-img", as we right-clicked on the image part of the Button. Go up a couple of levels in the HTML element hierarchy and you'll see the button tag with an ID without the "-img" suffix. That's what we want.
Stopping at the Press Event
We could at this stage simply use sap.ui.getCore().byId to get a handle on the control with this ID. But instead let's look at the Support Tool and how it can expose event breakpoints.
Opening the Support Tool, and the Control Tree section within it, we can search for the ID "__xmlview2--toggleFullScreenBtn". When we find it, we can switch to the Breakpoints tab and set a breakpoint for the firePress function (as that is what will happen when we press the Button: a "press" event will be fired):
Now when we press the Button, we land inside Breakpoint.js:
Finding the Event Handler
Now that we're here, there's plenty to explore, but let's cut to the chase and look at the Button instance. In particular, we'll look at an internal property "mEventRegistry", which is a map that holds the functions to be called when specific events are fired. Remember that this is an internal property, which we can't use, or rely upon, when building apps (for more details on this, see the post "JavaScript Do's and Don'ts in UI5" in this series). But we're not building, we're debugging, so all bets are off.
The "this" here is the Button control instance, and so we can see that the "this.mEventRegistry" map has an entry for "press":
this.mEventRegistry
-> Object {press: Array[1]}
Looking at this single entry in the array for the "press" event, we can see that the function handler is in a controller (surprise surprise):
this.mEventRegistry["press"][0].fFunction
-> sap.ui.controller.onToggleFullScreen(oEvt)
Unless you're using an older version of Chrome, you should be able to click on the function name to bring you to the "onToggleFullScreen" function definition:
Nice!
Examining What Happens
We can now put a breakpoint on line 163 (which I had done already before taking the screenshot above) and hit continue, to be able to then step into what this function calls (the updateMode function) when the stack gets here. This is what the updateMode function looks like:
It sets the Split App's mode to the appropriate value (the default of "ShowHideMode", or "HideMode" for the full screen effect). It also modifies the containing Shell control's appWidthLimited property so that a real full screen effect can be properly achieved.
So that's it! If you can become comfortable helping yourself with these tools, you'll be a better UI5 developer.
]]>The solid Model View Controller implementation in UI5 forces the separation of concerns. The logical place for models, views and controllers is in files, in (usually) separate folders: views, with specific file extensions, in a folder that's usually called "views", and controllers, with specific file extensions, in a folder called "controllers". And models elsewhere too.
This means that if you're wanting to try something out quickly, and it's a little bit more than a Hello World construction, then you're off creating files and folders from the start, before you can properly start thinking about the actual app idea you want to explore.
That is, unless you use a "Minimum Viable Code" technique. I like to think that it's a combination of the three great virtues of a programmer (laziness, impatience and hubris) that led to this approach :-).
Creating a folder structure and getting the right files in place does not go well with the "quickly" part of "try something out quickly". Trying something out, for me, means ideally using just a single file. It's fast, you can see everything in one place, and you're not creating unnecessary clutter. But when I want to try something out, I also want to ensure that the code I write is clean and separated, which for me implies declarative views and fragments in XML.
Luckily, for nearly all of the cases where I've wanted to try something out, I've found that this single-file technique works well. I can have one or more views, and fragments, all declared in XML, and one or more controllers too. And within that space I can declare models too.
Here's how it works:
And here's an example:
Here's a brief rundown of what you see:
9-16: This is the UI5 bootstrap, nothing unusual here
18-28: Here we have a script element that is of a made-up type "ui5/xmlview". This could be pretty much anything, as long as the browser doesn't try to process it. It's a technique used in templating systems. This contains some XML, which as you can see is a small but perfectly formed view definition (which incidentally conforms to the UI5 Coding Standards explained in a previous post in this series).
31-36: This is the local controller definition, which is referenced in the View's controllerName attribute (in line 20). It has the onPress handler for the Button's press event.
38-40: This is the startup code. It instantiates the XML View, getting the value for the viewContent property via jQuery from the script element we saw earlier, and then simply places that View in the body, via the "content" ID, as usual.
And that's pretty much it. You can add as many views as you want using the script element technique; I also use this technique for fragments too, and specify a made-up type of "ui5/xmlfragment" instead.
It's a great way to write simple one-file applications, especially for prototyping. I have written snippets to help me with this. I have been somewhat fickle when it comes to editors, so have left a trail of semi-finished snippet libraries for Sublime (SublimeUI5) and Atom (ui5-snippets), but have finally come full circle to my first love, vim. Here's a quick screencast of using my snippets in vim (powered by UltiSnips) to create a Minimum Viable Code MVC style app:
]]>DJ kindly asked me to write a blog for his 30 days of UI5 series to celebrate version 1.30 of UI5. My immediate reaction was what, me, what do I have to add to this subject?
I then realized that I had a little part in the UI5 story thus far and folks might enjoy the story, and the update, of how Fiori became Freeori.
It all started with a late night phone call with Den Howlett, where we discussed, as we sometimes do, the state of the SAP Union.
We became engrossed in a conversation about the new Workday version that had just been released, with its responsive and modern user experience, and wondered how SAP could compete in UX quality. To that end, SAP had recently released Fiori but adoption was poor with just a handful of users navigating the complex licensing policy around it.
At that time, it was necessary to pay for core SAP licenses, Gateway integration licenses and separate Fiori licenses. There were some pros to this - the paid nature of Fiori meant that Fiori was getting development dollars, but it wasn't getting adoption. Without adoption of a modern user experience, SAP would be in trouble in the mid-term.
From that conversation came a blog, Should SAP Fiori be Freeori?, which framed the conversation in a way we believed SAP would understand. That was key to our argument - we believed that Fiori was the solution to renovating the SAP user experience and that charging for it would risk SAP's long-term future.
What happened next?
Geoff Scott, CEO of ASUG, chimed in with Time for a UX Revolution, Not Evolution, and then Chris Kanaracus, at that time at IDG, now working at ASUG, continued the discussion with SAP users rattle sabers over charges for user-friendly Fiori apps and did a fantastic job of rallying the user groups and getting great quotes from the ecosystem:
"DSAG's position is clear. We say [Fiori] must be part of standard maintenance" - Andreas Oczko, DSAG Vice Chairman
"In a cloud world, you'd expect Fiori to be part of the upgrade cycle" - Ray Wang, Constellation Research
Dennis then put the hammer in with The SAP Fiori or Freeori discussion heats up, comparing a potential $5-700m one-time sale with Fiori to the risk of losing lucrative support revenues.
I received a few back-channel messages about this, suggesting that things would move, and sure enough, SAP opened up the Fiori product to all customers at no charge. What incredible news.
What does this look like one year on?
The concern my colleagues at SAP had was that charging for Fiori ensured that there was attention to the development. However, the reverse hasn't caused an issue. On the contrary, Fiori has more investment and Sam Yen's User eXperience group have gone from strength to strength.
SAP S/4HANA has been released and Fiori is at the center of the user experience. The core UI5 and Fiori technologies have significant investments, and with UI5 1.30 we see new functionality - just check out the release notes to see the extent! They include a focus on performance improvements and new page styles.
Personally, I'm incredibly proud of the individuals in the SAP ecosystem who have worked on this. The impact of renovating the SAP User eXperience shouldn't be underestimated.
]]>At one end of the spectrum, coding standards can be regarded as essential. At the other, they're the subject of many a passionate debate, second perhaps only to the Vim vs Emacs editor wars.
I'll provide some caution by starting with one of my favourite quotes from Andrew Tanenbaum:
"The nice thing about standards is that there are so many of them to choose from".
Use of standards
As software projects scale up, coding standards make more and more sense. On a recent run, I listened to the JavaScript Jabber podcast "JSJ ESLint with Jamund Ferguson". There was a great discussion about ESLint, and it was interesting to see the different perspectives on imposed coding standards, from "it restricts my freedom of expression" to "it makes teams more efficient as they work more as one". I think those two perspectives slot roughly onto the scale spectrum. If it's just you developing, then by all means use whatever style you feel like using. But if you're part of a larger team whose members have to work with each other's code, imposed coding standards do make a lot of sense.
The OpenUI5 project has some coding contribution guidelines as well as ESLint rules, well worth checking out, and pretty important if you want to contribute to UI5. It's also worth considering them for your own UI5 applications. One advantage of adopting the OpenUI5 project's guidelines and rules is that when you cross the path from your codebase into the underlying UI5 toolkit, the transition won't be as jarring.
Example XML View
The ESLint rules, and ESLint in general, would cause this post to be a lot longer than I want, so instead I'll look at some non-JavaScript conventions that I like to try and impose, at least upon myself. In particular I'll look at the style for XML View definitions. Here's part of a sample XML View, which I'll use to illustrate the style for which I strive. Note that the "»" character represents a tab (I have the list mode turned on in my editor to show invisibles).
In the following, each prefix represents the line number(s) to which I'm referring.
1: The correct namespace for a View is "sap.ui.core.mvc", not "sap.ui.core" as you might have seen in older documentation and code examples.
2: The controllerName attribute should be the first attribute for the View element. If there is no controller then obviously this attribute won't be present. It just makes it slightly quicker to look for the controller reference if it's going to be consistently in the same place.
3-5: All the namespace declarations should be in a contiguous chunk. There are other attributes that might appear for a View element, and that's fine, as long as they're not interspersed amongst the namespace declarations. Ensure any other attributes appear before the namespace declarations. Also, don't specify a namespace declaration unless you're going to use it. (In this example, I'm using all of them; you just can't see the use of the "core" here as it's on line 60, not in the screenshot.)
5: The default XML namespace for any given XML View should be the one that is dominant in the file, or "sap.m". If you're building responsive UI5 apps, you're going to need a good reason for "sap.m" not to be the dominant library. Also, it should be the last attribute in the View element, with a closing angle bracket directly following. (Unlike the use of other angle-bracket powered markup (such as HTML) in UI5, this is a rule that can be applied consistently. With the UI5 bootstrap in HTML, I like to have the closing angle bracket on a separate line, in the "prefix-comma" style from ABAP and other code, so I can add further data attributes without causing diff confusion.)
1-5, 6-7, 11-13 etc: All attributes should appear on lines of their own, indented appropriately.
13: When closing an element directly (like this:
8, 17, 22, 29, etc: All aggregation elements should be used explicitly. Don't omit implicit default aggregations for controls; instead, specify them. In this example, I'm using a sap.m.Page control, with the "subHeader" and "content" aggregations. While the "subHeader" aggregation must be specified explicitly anyway, the "content" aggregation is the default and doesn't need to be, but I do anyway. The same goes for the sap.m.List control's "items" aggregation.
18: Contrary to the rule about attributes being on their own separate lines, there's an exception, which is for the id attribute. If it exists, put it on the same line as the opening part of the control's element.
25: Unless there's a good reason not to, create the names for your event handler functions using an "on" prefix (like here: "onSelect"). This way they're consistent with the built-in view lifecycle event functions such as "onInit".
31-34: When writing complex embedded binding syntax, put each property of the map on a separate line, in the same way you'd write a map in JavaScript. Use spaces before and after the colons.
1-34: Use double quotes throughout; the only place you'll then use single quotes is within embedded binding syntax. Also, I know this is the subject of much debate, but the OpenUI5 project's standard specifies tabs for indentation. It came as a shock to me at first, but I have now embraced it :-)
Conclusion
I have no doubt caused some outrage to some of you, but hopefully just as much agreement with others. For me, this sample XML View is easy to read, a lot easier than some of the Fiori views that are generated from templates, for example. What are your standards?
]]>Building anything but the most trivial native apps (that's web native, of course) is not an easy ride. There are so many factors to get right. Debugging one of these apps can be just as tough.
The UI5 toolkit supports many features that make building and debugging easier. One of these is the support for the separation of concerns in the form of Model-View-Controller (MVC) mechanisms. Another is the ability to use a declarative approach to define your views (no moving parts), in XML, HTML or JSON; furthermore, you can use the subview and fragment concepts to divide and conquer complexity and embrace reuse.
The Support Tool
The particular feature I wanted to talk briefly about in this post, though, is the Support Tool, alternatively known as "UI5 Diagnostics", or even "the claw hand thing". This last nickname comes from the fact that you invoke the Support Tool with a challenging key combination: Ctrl-Alt-Shift-S.
There's also the Support Tool's little brother, invoked with Ctrl-Alt-Shift-P, which is a modal popup giving you a summary of the runtime context, and giving you the chance to turn on some debugging information.
You can see a shot of this here. (You can also turn on debugging via a URL query parameter sap-ui-debug=true.)
Sometimes this is all you need, especially if you want to see the UI5 version in operation, or turn on debug sources.
But the Support Tool is a super, multi-faceted mechanism which has proved invaluable over the years. It sports a large number of features, too many to cover here, so we'll just have a brief look at one of them (arguably the most important) - the Control Tree:
On the left hand side there's a super useful display of the app's control hierarchy. This alone is worth the cost (ahem) of the Support Tool.
Imagine being able to peer into the internal structure of a building, or having X-Ray Specs and being able to see your skeleton, or sitting in front of a monitor in The Matrix and seeing the world behind the curtain. This is what you get with the Control Tree.
UI5 apps can have complex UI structures. Fiori apps especially so. Controls within controls, wheels within wheels in a spiral array, a pattern so grand and complex. With the Control Tree you can see and grok this structure very quickly. Note you can view at a glance what the control actually is, what it contains & what contains it, and what its ID is.
But that's not all. On the right hand side, for a selected control (for example the Page control in the screenshot above), you can see all the properties of that control and from where in the control inheritance they come. You can modify the values for those properties and see the effect immediately, and even set breakpoints for each time the value for a particular property is read (G - get) or written (S - set).
Select the Binding Infos tab and see what bindings exist. You can see information on what model a binding is from, what type of binding it is, and of course the binding path. Here we can see some of the binding info for the List control in an app:
For you eagle-eyed readers, the model name here - "entity" - is the name of the domain model in this app example. Often the domain model is the unnamed model, but here it has a name. Can anyone guess what this (publicly available) app is?
There's so much to discover with the Control Tree and the rest of the Support Tool; I recommend you hit Ctrl-Alt-Shift-S the next time you're running a UI5 app, and start exploring.
Finishing off
Let's finish this post off with a quick piece of trivia and a tip. If you've had the Chrome Developer Tools open and then open the Support Tool, you'll notice a ton of new messages in the console, and it's a lot more verbose. This is because when the Support Tool starts up, it sends a message to the logger to crank the logging output up to 11.
By default, the log level is set to 1 ("ERROR"). If you're running the app with debugging on, that level is 4 ("DEBUG"). But opening the Support Tool causes this to be set to 6 ("ALL"). You can turn that down again with the jQuery.sap.log.setLevel function. Otherwise, LOG ALL THE THINGS!!1!
]]>In this post we'll be looking at how you can speed up the load times of your UI5 applications by using a Component preload file. Those of you who are familiar with SAP Fiori applications will probably already know what a Component preload file is; however, those of you who aren't will almost certainly have seen a reference to this file before. This file is referenced in an error message that appears in the console whenever you load a UI5 app which is lacking a Component preload file.
So just what is this preload file and why should I care?
The preload file is essentially all of the files which make up your application, so thatās the Component itself, Controllers, Views, Fragments and so on, all compressed and inserted into one file, the preload file. If this file exists then UI5 will only load that file, and it wonāt load of all of the other various files which it ordinarily would have done. The error we saw earlier is caused because UI5 looks for a preload file early in the execution flow, but of course did not find one, and so carried on loading all of the files individually.
Now that weāve cleared up what the file actually is, and why that error appears, just why should exactly should we worry about it? After all weāve ignored that error up until now and all our apps have worked just fine. Well, the reason we should care is that it dramatically decreases page load time. This is due to the app only having to make one call to get the preload file, rather than all of the individual calls for each file, but also because in the preload file the code is āminifiedā, which means the file size is also very small relative to the full size individual files. This is especially important when developing UI5 applications which are to be used over a mobile data connection, where size has a very large impact on initial load performance. As an anecdotal example, on the simple UI5 app which I have just created a preload file for my initial load time went from 8-9 seconds down to 3-4 seconds, which is tremendous improvement!
Sounds great! So how can I make a preload file for my UI5 app?
For this next section you will need to have NodeJS, npm and Grunt installed on your machine. If you don't know how to install and use these tools then do reach out to me on Twitter.
After you have all of the above installed, you'll need to create a package.json file in your UI5 app's root directory. Open the file up, paste in the contents below, and don't forget to edit them accordingly:
{
"name": "barcode-test",
"version": "0.0.1",
"description": "",
"main": "index.html",
"author": "John Murray",
"license": "Apache License, Version 2.0",
"devDependencies": {
"grunt": "^0.4.5"
}
}
After creating and saving this file, install the Grunt OpenUI5 tools which are made by the UI5 team at SAP. To install these tools, open a terminal session in your UI5 root directory and run the command "npm install grunt-openui5 --save-dev". This will download and install the tools, and also add them to the "devDependencies" section of your package.json file.
Next, again in the UI5 app root directory, create a file called Gruntfile.js. Into this file copy and paste the contents below, and we'll go through what it all means in a moment.
module.exports = function(grunt) {
// Project configuration.
grunt.initConfig({
pkg: grunt.file.readJSON('package.json'),
openui5_preload: {
component: {
options: {
resources: {
cwd: '',
prefix: '',
src: [
'webapp/**/*.js',
'webapp/**/*.fragment.html',
'webapp/**/*.fragment.json',
'webapp/**/*.fragment.xml',
'webapp/**/*.view.html',
'webapp/**/*.view.json',
'webapp/**/*.view.xml',
'webapp/**/*.properties'
]
},
dest: '',
compress: true
},
components: true
}
}
});
grunt.loadNpmTasks('grunt-openui5');
}
This is quite a simple example but it will suffice in most use cases. First of all we read in the package.json file created earlier, which provides the dependency list. Then we set the configuration options for openui5_preload, which is the specific task we are going to be using from the OpenUI5 toolset. The src patterns list the files to be included in the preload. The dest option controls where the generated Component-preload.js is written; we want it alongside our Component.js file and can therefore leave it blank. Finally, we load the grunt-openui5 toolkit, as previously installed and specified in package.json.
For the full documentation and parameter list I'd recommend looking at the Grunt OpenUI5 tools GitHub page.
That's all the configuration set up; now it's time to generate our preload file! Fire up a terminal session in the same directory as your Gruntfile.js and run the command "grunt openui5_preload". You should see the task output, along with a Component-preload.js file alongside your Component.js file.
Final thoughts
Congratulations, you've just made your first preload file and are now well on the way to creating even better apps with UI5!
I was browsing through the controls that were new with 1.28, using the OpenUI5 SDK's Explored app's filter-by-release feature, and came across the Message Page control.
What caught my eye was the text on the control. When you think about it, there aren't that many controls that have default text on them.
Looking into how this would work in other locales (this control as you see it would only make immediate sense in English-speaking countries), and how the text was specified, led me down a path that ended up at a place that reminded me of OpenUI5's pedigree. Born inside SAP, enterprise-scale thinking permeates the toolkit, and is very visible in this context.
In the init function of MessagePage.js you can see that the control's text property is being set to the value of the MESSAGE_PAGE_TEXT property in the message resource bundle:
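The relevant line looks roughly like this (a sketch from memory; the exact code in MessagePage.js may differ slightly):

```javascript
// In MessagePage's init: default the text property from the sap.m library's
// resource bundle, so the control shows a translated text out of the box
this.setProperty("text",
    sap.ui.getCore().getLibraryResourceBundle("sap.m").getText("MESSAGE_PAGE_TEXT"),
    true);
```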
This MESSAGE_PAGE_TEXT property in the base resource file messagebundle.properties has the value "No matching items found.":
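In properties file form, that's simply:

```properties
MESSAGE_PAGE_TEXT=No matching items found.
```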
Even if you know only a little about how resource models work, you may realise that there's more to it than this. There are actually 39 different translated versions of this base resource, representing the many languages (more specifically, locales) into which this control (and other controls) have been translated:
Let's have a look at a few (with the second grep I'm omitting those that have Unicode encodings, because they're hard to read):
And of course, UI5's pedigree extends beyond translations: right-to-left (RTL) is also supported, out of the box.
Let's bring this post to a close with a couple of examples. Don't forget you can explicitly specify the language or locale with a special query parameter, sap-language, in the URL.
Here's Hebrew ("iw"), with RTL kicking in automatically:
And to finish, how about another language:
(See what I did there? :-)
Learning your way around UI5 can be hard sometimes. With the new tutorials and improved structure in the developer guide, help on the journey to UI5 mastery has got better over the last few months.
But if you really want to understand the UI5 magic in all its depth you might want to dig a little deeper. For my part I can truly recommend going back to the roots to have a look at the UI5 base classes. They are properly lined up like a string of pearls, building upon each other and forming the high-level architectural blueprint of the toolkit as a whole.
All UI5 base classes come with a set of metadata, basically simple JSON that may hold additional information describing the instance. In addition, this metadata has an underlying metadata implementation that provides helper functions, validation logic and some more convenience.
sap.ui.base.Object
This "instance plus metadata" concept is already introduced with sap.ui.base.Object, the first in line; almost everything you want to instantiate in UI5 will inherit from it. Its children are mostly worker-like classes taking care of things such as parsing, or basic data-carrying objects like the event implementation.
sap.ui.base.EventProvider
While Object only sets the stage, sap.ui.base.EventProvider is the first to actually have capabilities to share. And you might have guessed it from the name already: the EventProvider introduces eventing in UI5. With functions to attach, detach and fire events, its toolkit is only small compared to what is still to come. Nevertheless, it is the starting point for most of the key features in UI5. Model, Binding, Router: at heart they are all "just" EventProviders.
sap.ui.base.ManagedObject
Next in line is a heavyweight champion compared with its predecessor: sap.ui.base.ManagedObject. It is the herald for all instances that will later be rendered, as it introduces properties, aggregations and associations in the metadata. It will never be rendered itself, but it sets the stage and extends the metadata implementation, adding getters and setters for the fields that are introduced. Moreover, it allows for data binding and might even have its own model. The most prominent example is the Component.
sap.ui.core.Element
The first base class that might have a place in the DOM is sap.ui.core.Element. It has to be said that the Element itself normally has no renderer of its own and therefore is not to be placed standalone into the DOM. But it is the class you want to use in aggregations of your own controls, with the list item as one of its best-known subclasses. It is the one that completes the metadata implementation for base classes.
sap.ui.core.Control
Last up for this journey through UI5 architecture is sap.ui.core.Control, whose children are full-blown UI elements. The last few things that are still missing are introduced now. Besides direct DOM placement and the renderer, it basically picks up the last pieces with the busy state and the ability to handle browser events. And of course, every real UI control has learned from Control.
sap.ui.core.Core
This post gives just the briefest of UI5 architecture overviews, covering only the bare essentials. There is much more to discover in that respect and I highly recommend you check out the entire package from GitHub and go exploring. There are definitely some gems hidden deep in the UI5 repository. Just one more for now: sap.ui.core.Core, her majesty itself. And you might have guessed it, humble as she is, she downgraded herself recently and finally is nothing more than a (Base) Object.
When creating applications, the experience of the user should be one of the key considerations driving build and development. One aspect of this is the way that data is entered, saved and displayed to the user, which can drastically affect the usability of an application.
For this short post, we're going to take a look at the Date Picker, an input control in the OpenUI5 library used for simple capture of dates from the user. As we all know, dates can be somewhat of a nuisance to work with, especially when entered on small screens in particular formats. This control aims to ease this with a calendar-style view of dates to select from.
It's a simple, yet effective, little control that allows users to quickly select dates with a familiar and quick-to-use calendar-style view. The control is also configurable to display different date formats based upon the displayFormat property, which can be useful when screen real estate is at a premium.
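In an XML view, using the control might look something like this (a minimal sketch; the binding path, format value and handler name are illustrative):

```xml
<DatePicker
    value="{/birthDate}"
    displayFormat="long"
    change="onDateChanged" />
```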
By using controls like the Date Picker, with dedicated input mechanisms, we can all aim to make our applications easier to use within the day-to-day lives of users.
For additional controls focussed around date and time input, take a look at the Date Range Selection control when working with time periods, as well as the Date Time Input when making forms that handle dates and times together.
Like many developers who find themselves building a lot with UI5, I find my working environment is mostly a local one, supplemented by activities in the cloud.
Local Environment
More precisely, while I often use the excellent SAP Web IDE for training, generating starter projects and custom Fiori work, my main development workflow is based upon tools that are local to my workstation. In my particular case, that's most often my MacBook Pro running OSX, but sometimes a Debian-based environment running in a chroot on my Chromebook, courtesy of the awesome crouton project. I use tools that work for me, that don't get in the way of my flow; at the bare essentials level, that means a decent editor (Vim or Atom), a local webserver (based on NodeJS), and a runtime platform that doubles as a debugging, tracing and development environment (Chrome).
Cloud Environment
When I'm working in the cloud, specifically with the SAP Web IDE, the toolset is totally different. Not only that, but the bootstrapping of UI5 works slightly differently. In this short post, I wanted to explain what I do to flatten any speedbumps when transitioning between the two environments. The worst thing for me would be to have to alter my codebase to take account of different runtime environments.
Different UI5 Versions
Locally, I maintain a variety of different UI5 versions that I've picked up over the months and years. You never know when you'll need to go back to a previous version, or even look through the complete history, to see how something has changed. This is what the contents of my local ~/ui5/ folder look like:
I use the NodeJS-based static_server.js script to serve files from this folder, as well as another folder which contains my UI5 projects. From here, I can access different UI5 versions by changing the location that the UI5 bootstrap looks in. (Note that while I can and do often access older versions, I pretty much always develop against the latest version unless there's a good reason not to ... access to older versions is almost always for reference purposes.)
Usually I specify "latest" in the URL, which refers to the symbolic link in the folder above, which (via the use of the small "setlatest" script) in turn points to whatever folder represents the latest unpacked zip:
If I want to refer to an older version, I do so like this:
The same approach with the URL path applies to the contents of the "src" attribute in the UI5 bootstrap:
Harmonising Local and Cloud Bootstrapping
However, this doesn't play well with the SAP Web IDE, at least not directly. So I've come up with an approach that minimises the fuss and disruption when taking a UI5 app repo that I've developed locally and cloning it for use in the SAP Web IDE on the HANA Cloud Platform (HCP) environment, or vice versa.
Let's look at an almost empty UI5 project folder that I've created locally:
In it, we have the index.html which contains a UI5 bootstrap that looks like this:
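A typical bootstrap along these lines (a sketch; the theme and library attributes are illustrative, the src value is the important part):

```html
<script id="sap-ui-bootstrap"
        src="resources/sap-ui-core.js"
        data-sap-ui-theme="sap_bluecrystal"
        data-sap-ui-libs="sap.m"></script>
```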
The "src" attribute refers to a resources folder in the same location as the containing index.html. The value of this attribute ("resources/sap-ui-core.js") is pretty much the de facto standard for "the location of the UI5 runtime", so it's sensible to change this only if you have a very good reason, not least because you'd be starting a battle that you might not want to see through.
If you look at the folder listing above, you'll see that this resources folder is actually a symbolic link to the resources folder in the "latest" UI5 version, as described earlier (yes, we have a symbolic link following a symbolic link). So we're bootstrapping whatever the latest version of UI5 is.
We're not interested in having this in our UI5 application repo (it would be of no use in most other contexts) so in our .gitignore file, we exclude it:
When we want to run the application in the HCP context, via SAP Web IDE, we use a mapping file that translates our bootstrap "src" attribute URL into a resource that is available globally on HCP. This mapping file is neo-app.json, and here, it contains this:
The path "resources" is mapped to the "sapui5" service target at "/resources". This means that the script element in our index.html can successfully resolve and bootstrap UI5 from the right place, with zero changes between my local environment and HCP.
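The mapping in question looks something like this (a sketch of the standard route declaration; the description text is illustrative):

```json
{
  "routes": [
    {
      "path": "/resources",
      "target": {
        "type": "service",
        "name": "sapui5",
        "entryPath": "/resources"
      },
      "description": "SAPUI5 resources"
    }
  ]
}
```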
With my "resources" symbolic link in place, along with the neo-app.json mapping, I can enjoy a smooth transition between local and cloud based development when I'm working on UI5 development with the latest version. It's a simple technique; get it in place, and you could be looking at some happy productivity gains, without losing any reference to older UI5 versions locally.
Giving an end user good feedback regarding their interaction with the application, or the application's interactions with the back end, has always been a bit of a challenge in UI5. Until recently pretty much every developer had a different style of capturing and exposing messages, with many of us building our own message log solutions. This lost a level of the "Enterprise" uniformity that is often required for our applications.
In recent releases, however, SAP and OpenUI5 have provided a very robust and uniform way of exposing these messages.
Now users can expect messages to be shown in a clear and concise way that is the same across all UI5 applications; no more are we hacking around arrays to provide our own message logs. From the bright colours to the simple click-through to view a more detailed message, everything about this control has been aimed at the user who expects clear interactions; even Web Dynpro Java (WDJ) handled messages better than early UI5.
In the example below I have mocked up a couple of Buttons that trigger the Popover in its two "states". I tend to lean towards the full Popover, as it's easy to see a full list of the most recent messages. However I can see good use cases, in mobile apps for example, where the condensed popover would be best. As shown in Day 3 of this series, on Semantic Pages, the footer bar makes a nice place for the Button that controls the Popover.
This control still isn't perfect and there are a number of improvements I could see being added over the next few iterations, but it is certainly a dramatic leap in the right direction for UI5 and user interaction. The ability to easily delete messages would be nice, along with a way to prevent duplication on error.
A nice touch that can be achieved relatively simply is to alter the look and feel of the Button depending upon the level of messages that have been posted. Those that have been following this series will have read DJ's post on Expression Binding from Day 2 of this series. This would be a great way to derive the icon in the Button, based upon the "highest" status message. Doing this gives the user feedback without them even opening the Message Popover, and to me, giving a user feedback at the earliest possible time is always going to give them the best experience.
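One way to derive that icon is with a plain helper function; here's a minimal sketch (the function name, severity ranking and icon URIs are my own illustrative choices, not from the post):

```javascript
// Rank message types so we can find the "highest" (most severe) one
var SEVERITY = { Error: 3, Warning: 2, Success: 1, Information: 0 };

// Hypothetical helper: given an array of { type: ... } messages,
// return an icon URI suitable for the message Button
function iconForMessages(aMessages) {
  var oTop = aMessages.reduce(function (a, b) {
    return SEVERITY[a.type] >= SEVERITY[b.type] ? a : b;
  });
  return {
    Error: "sap-icon://message-error",
    Warning: "sap-icon://message-warning",
    Success: "sap-icon://message-success",
    Information: "sap-icon://message-information"
  }[oTop.type];
}
```

The result could feed the Button's icon property, for example via a bound formatter or an expression binding.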
Writing your component-based applications in UI5, you might be familiar with a long list of settings in your metadata section, making you scroll down for hours before reaching the point where the first violin plays. This is not only annoying but in fact bad design, as it means mixing static configuration in large amounts with actual code.
One way to solve this is the usage of a manifest file: one central asset that holds your entire application configuration. The UI5 creators have drawn inspiration from the W3C web application manifest concept that is currently under investigation, and created a UI5-flavoured version of it. The app descriptor in UI5 is basically a JSON file named manifest.json that is expected in the same folder your component lives in. All you need to do to get started is to add an attribute manifest with the value "json" to your component metadata.
Introduced in 1.28 in a basic version, with the upcoming 1.30 it is even smarter. Beyond static configuration for packaging and deployment, it even helps to save you some code, especially when it comes to model instantiation. The manifest itself is structured in namespaces, of which we want to look briefly into sap.app and sap.ui5 for this case. More details and examples can be found in the 1.30 documentation preview.
sap.app:
Mostly app-specific attributes can be found here. You can also get set up for your data model and resource bundle here. One property called "dataSources" expects an object that holds the URL to your service, the service type and some additional settings if needed. A full blown service configuration would look like this:
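Something along these lines (a sketch; the application id, service path and data source name are illustrative):

```json
"sap.app": {
  "id": "my.bookshop",
  "type": "application",
  "i18n": "i18n/i18n.properties",
  "dataSources": {
    "mainService": {
      "uri": "/sap/opu/odata/sap/MY_SERVICE/",
      "type": "OData",
      "settings": {
        "odataVersion": "2.0",
        "localUri": "localService/metadata.xml"
      }
    }
  }
}
```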
If you have more than one service you can simply add another object to this attribute. These can later be referenced by the given name. In addition, we added the relative path to the i18n file here, and will make use of this later as well.
sap.ui5:
This namespace is used for any configuration that can be used by the UI5 runtime directly. That counts for the routing configuration, but also for UI5 library dependencies and of course our case, model instantiation.
For i18n it is pretty straightforward, and a named i18n resource model (named "i18n") will be created by the UI5 runtime. For the actual data model(s) you can also specify a name, or keep it blank for an unnamed data model. Just set the data source specified earlier and UI5 will handle the rest. The created models will be at your command as early as the init function of your component.
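A corresponding sap.ui5 models section might look like this (again a sketch, referencing the hypothetical "mainService" data source name from earlier):

```json
"sap.ui5": {
  "models": {
    "i18n": {
      "type": "sap.ui.model.resource.ResourceModel",
      "settings": {
        "bundleUrl": "i18n/i18n.properties"
      }
    },
    "": {
      "dataSource": "mainService"
    }
  }
}
```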
To conclude, this is only a snapshot of one little feature that is built into the new UI5 manifest, but it showcases pretty well how this new file will ease your development routines, help to clean up your components and limit repetitive lines of code.
Explored, before its promotion.
OpenUI5, like its twin sibling SAPUI5, has a great SDK.
The SDK contains plenty of example code snippets, especially in the Explored app. Up until version 1.20 the Explored app was "just another" app in the Demo Apps section, but after that it was (rightly) promoted to prominence at the top level of the SDK menu structure.
The latest addition to Explored is a set of code examples that accompany a great multi-step walkthrough of many of the features and practices of UI5 development. A number of things are changing in release 1.30, including the introduction of the application descriptor and a new way of defining modules. This walkthrough covers these topics and many others too. It's well worth a look.
One thing that immediately caught my eye when I selected the Explored sample that corresponds to Step 30 of the walkthrough, describing the Debugging Tools, was that the excellent UI5 Diagnostics Tool popped up out of nowhere!
(I'm a big fan of this tool; there's so much information it offers that, as a UI5 developer, you can't afford to ignore its help.)
I was curious as to how this automatic opening of the tool had been achieved, and a quick look at the appropriate webapp/Component.js asset in the sample's code section gave me the answer:
jQuery.sap.require("sap.ui.core.support.Support");
var oSupport = sap.ui.core.support.Support.getStub("APPLICATION");
oSupport.openSupportTool();
Nice!
Whilst web apps are great, and suit the vast majority of situations perfectly, sometimes they just don't quite cut the mustard. It is in these situations that we are presented with a difficult choice. Do we take option A: sacrifice the features which are specific to native applications for the sake of sticking with UI5 and the benefits that web apps bring? Or do we go with option B: sacrifice UI5 and the web app benefits, instead going with native code, but then have access to all the features? Even in the not-so-distant past we would have had to weigh up the pros and cons of each option and make our decision accordingly.
More recently, we were provided with an option C: use PhoneGap to package our web application as a native app, using UI5 and an assortment of plugins to achieve our ends. However, this option was not without its own challenges and problems; you had to install libraries for all the platforms you wished to build for, structure everything in a rather precise manner, and to top it all off, battle with the rather clunky command line interface. This did of course improve over time, especially once you had your setup and workflow down to a tee, but it was never smooth sailing. Thankfully, though, we now have an option D!
Option D is PhoneGap Build, a service which takes everything that is great about standard PhoneGap and removes everything that is bad about it, providing a fast and streamlined experience. This service is freely available, and will allow you to have a native version of your UI5 app up and running within minutes.
As an overview: you create your UI5 application as you would normally, except this time you also include a config.xml file in the root folder. It is this file which the Build service uses to create your application; you simply specify the location of your index.html file and reference any plugins you wish to use. You then zip up all of your code and upload it to their website, and it will automatically build a native app for Android, iOS and Windows Phone in a matter of minutes.
For a more thorough getting started guide on all of this, I am currently writing an in-depth blog series on my own website, the first part of which can be found here: UI5 and PhoneGap Build: First Steps.
(This book was a close companion in an earlier life.)
My degree in Latin and Greek is not entirely without foundation or reason, and it provides me with at least a small sense of origin when it comes to words. The 3rd declension noun σῆμα conveys the idea of a mark, a sign, a token. It refers to "meaning", essentially, and the use in modern languages of the word semantic often implies an abstraction, a layer that confers or allows meaning to be defined or carried.
What has that got to do with UI5 reaching release 1.30? Well, take a look at the fledgling Semantic Page. It's the root of a series of new controls that are perhaps set to encourage standardisation of Fiori UIs. The SAP Fiori Design Guidelines describe a rich set of controls, but more importantly they describe how those controls should typically be employed.
Floorplans such as the Split Screen Layout and the Full Screen are all fairly familiar to us. But consistency comes from attention to a more granular level of detail, and UI designers are encouraged to place certain controls in standard places. A couple of examples: action buttons belong in the bottom right (in the footer) of a page, while the new Message Popover from 1.28 belongs in the bottom left.
When SAP created Fiori application developer teams across the world to build out the Fiori apps that we see available today, it was almost inevitable that the different styles and approaches across teams and members would result in a variety of structures, making it difficult to get the UX right and the UI consistent, and causing maintenance headaches. So SAP created scaffolding (sap.ca.scfld), a set of mechanisms that abstracted away a lot of the common boilerplate stuff, allowing the developers to focus on the application logic (and preventing them from reinventing the boilerplate, slightly differently, every time). But this scaffolding was a little bit too monolithic, and I think the plan has been to phase it out.
I'm also thinking that the alternative could involve this set of semantic controls. Take a look at the way the Semantic Page Master-Detail sample puts things in the appropriate place: at a semantically meaningful level of abstraction above the individual mechanics of a Page control's aggregations, for example.
It's similar in the Semantic Page Full Screen sample too. To get a feel for this level of abstraction, take a look at how the aggregations are filled; nowhere in this XML view definition does it say where the semantic controls should go:
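The flavour of such a view is roughly this (a minimal sketch using the sap.m.semantic namespace; the texts and handler names are illustrative):

```xml
<semantic:FullscreenPage
    xmlns:semantic="sap.m.semantic"
    title="Product">
    <semantic:mainAction>
        <semantic:MainAction text="Approve" press="onApprove" />
    </semantic:mainAction>
    <semantic:messagesIndicator>
        <semantic:MessagesIndicator press="onShowMessages" />
    </semantic:messagesIndicator>
</semantic:FullscreenPage>
```

The page itself decides where each semantic child ends up; the view just declares what the actions are.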
What we seem to have so far is a small hierarchy of Page based controls, that looks like this:
SemanticPage
|
+----------------------+
| |
MasterPage ShareMenuPage
|
+---------------+
| |
DetailPage FullscreenPage
And there are plenty of semantic controls too. It doesn't replace the breadth of functionality that the scaffolding offered, but it's a start, and it feels more modular. A namespace to watch!
The expression binding feature was introduced with version 1.28, and allows logic to be included directly in an embedded binding. It's a very useful feature, but a double-edged sword that should be wielded with care.
Before expression bindings, any embedded binding that required a condition to be checked, a calculation to be made, or a reformatting to happen needed a reference to a formatter function that would be either in a dedicated formatter module (common) or in the controller (less common). When using XML views, for example, the Model-View-Controller philosophy remained strong, in that any imperative computation remained separate from the pure declarative UI definitions.
But in practice you find yourself creating a lot of formatter functions. Yes, some of them could probably be refactored, and if you had time, you could probably find that library of common formatter functions that you'd been half building in your copious free time. Regardless, you end up with a lot of helper functions, small and large, that sometimes become a maintenance burden.
Enter expression bindings. If you're prepared to add sugar and milk to your coffee, if you're prepared to sacrifice the absolute purity of MVC for the sake of brevity, then expression bindings can be your friend.
Here's an example:
The greeting is created in three different ways. First, we use a function inside a formatter module. Then, we use the same function but in the controller that is linked to the view (note the dot prefix in the value of the formatter property, specifying that the function is to be found in the controller). Finally, we have the same example in an expression binding, directly in the view.
Those who have had their coffee already today (milk and sugar optional) may have noticed something unusual in the expression binding example. Instead of having the literal "Good" outside of the embedded binding curly brackets, like this:
<Input
enabled="false"
description="Expression"
value="Good {= ${/now}.getHours() > 11 ? 'afternoon' : 'morning'}" />
... it's like this instead:
<Input enabled="false"
description="Expression"
value="{= 'Good ' + (${/now}.getHours() > 11 ? 'afternoon' : 'morning')}" />
(Note the extra parentheses in this version.)
This is because, currently, any literal string outside of the curly braces is rejected by the runtime.
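For comparison, the formatter-based equivalent of this expression boils down to a plain function; here's a minimal sketch (the function name is illustrative):

```javascript
// A formatter function producing the same greeting as the expression binding:
// "Good morning" before noon, "Good afternoon" from midday onwards
function formatGreeting(oNow) {
  return "Good " + (oNow.getHours() > 11 ? "afternoon" : "morning");
}
```

In a view, you'd reference it via the formatter property of the binding instead of embedding the logic inline.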
Anyway, expression bindings are here, and they may be the sort of thing that you're looking for. Possibly exactly what you're looking for, if you're considering XML Templating. But that's a post for another time.
The Message Strip is a nice new control in 1.30. It's in the main (sap.m) library of controls, and for me it appeals because it bridges the gap between no message at all and a modal dialog box, which is sometimes too heavyweight.
(If you're wondering about the Message Toast control, don't forget that this lighter-weight mechanism should only be used for "less important" messages, such as informational messages on the successful completion of a step.)
The nice thing about the way this has been designed is actually its simple, perhaps restrictive, nature; a nature that will give apps a better chance of having consistent messaging. The possible message types are defined in the core, and are displayed differently, via colour and icons. There's an optional close button, and an optional link that is always displayed at the end of the message text. Pretty simple and neat.
And that's about it, which in most cases is all that will be needed to display a useful short message inline within the application UI, especially in the context of desktop-based UI designs. If you want to manage messages in a more complete way, you might want to take a look at the Message Popover. But don't dismiss the new Message Strip; it may be just what you're looking for.
In recent versions of the SDK you'll find a new section called "Coding Issues to Avoid". It's great to see this take shape and start to become formalised. Some of the issues are obvious, at least to some folk, but it's always helpful to have a reference.
Let's have a look at a couple of the dos and don'ts here.
The top item on my list is "Don't use private and protected methods or properties of UI5". Far too often, I see code that refers to internal properties of UI5 controls, especially to the arrays and maps that are managed internally (for the aggregations, for example). I think it's fair to say that 98% of the time, the use here is totally wrong, and there's a public API to give you what you want. There have been a couple of instances in the past where I've seen something for which there appeared to be no equivalent "legal" alternative, but that could be down to API maturity, or lack of documentation.
Related to this item is almost its antithesis, which is to create properties that inadvertently clobber properties of the same name in an existing context. A great example of this is within a controller definition. There's a nice pattern, which can be seen in many places including the reference apps in the SAP Web IDE, where in any given controller you would create controller properties to refer to the related view, and often the domain or view properties model, in the init event, like this:
sap.ui.controller("local.controller", {
_oView : null,
onInit : function() {
this._oView = this.getView();
},
onSomeEvent : function(oEvent) {
...
this._oView.someFunction(...);
...
}
});
But sometimes the developer, averse to underscores, will write it like this:
sap.ui.controller("local.controller", {
oView : null,
onInit : function() {
this.oView = this.getView();
},
onSomeEvent : function(oEvent) {
...
this.oView.someFunction(...);
...
}
});
What actually happens is that the call to
this.oView = this.getView();
is clobbering the internal property oView of "this" (the controller), which is pointing at the view it's related to. Luckily, what it's being clobbered with in this small (underscore-less) antipattern is another reference to the view itself, so not much immediate harm is done, but it's not entirely safe or future proof.
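To see the clobbering in miniature, here's a sketch in plain JavaScript. The `makeFrameworkController` helper is a toy stand-in for the framework, not the real `sap.ui.controller` machinery; it only illustrates how an underscore-less assignment silently overwrites a framework-owned property, while the prefixed version leaves it alone.

```javascript
// Toy stand-in for a UI5 controller factory (NOT the real sap.ui.controller
// API): the "framework" pre-populates an internal property oView.
function makeFrameworkController(definition) {
  var controller = Object.assign({}, definition);
  controller.oView = { id: "frameworkView" };       // framework-owned
  controller.getView = function () { return this.oView; };
  return controller;
}

// Underscore-less anti-pattern: this.oView = ... overwrites the framework's
// own property. Here the replacement happens to be the very same object, so
// nothing breaks *yet* - which is exactly why it goes unnoticed.
var risky = makeFrameworkController({
  onInit: function () { this.oView = this.getView(); }
});
risky.onInit();

// Prefixed pattern: the controller's private reference lives in _oView and
// the framework-owned oView is untouched.
var safe = makeFrameworkController({
  _oView: null,
  onInit: function () { this._oView = this.getView(); }
});
safe.onInit();
```

The point is that the risky version only works by accident; if the framework ever stored something other than the view in its internal property, the assignment would corrupt it.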
Another interesting best practice described in this section of the SDK relates to internationalisation (i18n). What one should do is use placeholders (such as {0}) in more complete sentences in translatable resources. What one often finds instead is that application texts are fragmented into short phrases and built up with concatenation, along with variables.
The problem is that sentence structure varies across languages; as described in the "Don't" example in this section, a typical example is where the verb goes. It's better to avoid programmatic text construction, and leave it to the translation experts. Go long, and go home.
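The difference can be sketched in plain JavaScript. The `getText` function below is a minimal illustration of {0}-style placeholder filling, in the spirit of what a resource bundle's text lookup does; it is not the UI5 API, and the example texts are made up:

```javascript
// Minimal {0}/{1} placeholder substitution (illustration only, not the
// real UI5 ResourceBundle API).
function getText(pattern, values) {
  return pattern.replace(/\{(\d+)\}/g, function (match, index) {
    return values[Number(index)];
  });
}

// Whole-sentence texts with placeholders translate cleanly, because each
// language's pattern controls its own word order:
var en = getText("Order {0} was shipped to {1}.", ["4711", "Manchester"]);
var de = getText("Auftrag {0} wurde nach {1} versandt.", ["4711", "Manchester"]);

// The fragmented alternative, "Order " + id + " was shipped to " + city,
// hard-codes English word order into the program logic, which is exactly
// what the SDK's "Don't" example warns against.
```

Note how the German pattern places the verb fragments differently; with concatenation in code, a translator could never fix that.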
Anyway, have a look at the rest of this JavaScript Code Issues section in the SDK; plus there's a CSS Styling Issues section too!
UI5, the collective short name for both SAPUI5 and OpenUI5, is soon to reach a milestone, with the release of 1.30. There's already a preview release available.
The UI Development Toolkit for HTML5, to give it its proper long-form Culture-style name, has come a long way in the last few years. It's a multi-faceted toolkit that shows pedigree, passion and influence from many directions. From the Web Dynpro inspired design roots, through the hard work and commitment from all the great designers and developers, to the exemplary responsive controls we have come to know and love in the sap.m library and beyond.
And of course there's the open sourcing of the toolkit, a great move on SAP's part, influenced in no small way by many developers both external and internal to SAP. Many of the UI5 core team have open source in their blood, part of a new generation that is making SAP what it is today.
Where would SAP Fiori be without UI5? Nowhere. The engine behind the UX revolution that is powering today's and tomorrow's SAP applications (with S/4HANA) is UI5.
As Norman Cook might say, "You've come a long way, baby".
So as a bit of fun, and to celebrate this version 1.30 milestone, here's a series of 30 posts, one a day, on UI5 related topics. Small posts from me and some guest authors, designed to be read during a quick coffee break. Nothing earth shattering, but hopefully things that will whet your appetite for further reading, and perhaps bring to your attention features that you might not yet have had a chance to consider.
(This series is also available online, for the Kindle, at Amazon: 30 Days of UI5: Celebrating SAPUI5 and OpenUI5's milestone 1.30 release in Autumn 2015.)
The Series
Day 1 – Welcome to 30 Days of UI5! by DJ Adams (this post)
Day 2 – Expression Binding by DJ Adams
Day 3 – Semantic Pages by DJ Adams
Day 4 – Creating Native Applications with UI5 by John Murray
Day 5 – OpenUI5 Walkthrough by DJ Adams
Day 6 – The App Descriptor by Thilo Seidel
Day 7 – JavaScript Dos and Don'ts for UI5 by DJ Adams
Day 8 – User Notifications with the Message Popover by Sean Campbell
Day 9 – Bootstrapping UI5 Locally and in the Cloud by DJ Adams
Day 10 – Handling Dates with the Date Picker by James Hale
Day 11 – Lightweight Notifications with the Message Strip by DJ Adams
Day 12 – Base Classes in UI5 by Thilo Seidel
Day 13 – Multi-language support out of the box – UI5's pedigree by DJ Adams
Day 14 – Speeding up your app with a Component preload file by John Murray
Day 15 – The UI5 Support Tool – Help Yourself! by DJ Adams
Day 16 – UI5 and Coding Standards by DJ Adams
Day 17 – UI5 and Fiori – The Story of Open and Free by John Appleby
Day 18 – MVC – Model View Controller, Minimum Viable Code by DJ Adams
Day 19 – A Short UI5 Debugging Journey by DJ Adams
Day 20 – Fragments and Minimum Viable Code by DJ Adams
Day 21 – Spreading the UI5 Message by DJ Adams
Day 22 – Merging Lists with UI5 by Chris Choy
Day 23 – Taming the Resource Model Files by Nathan Adams
Day 24 – An introduction to sap.ui.define by DJ Adams
Day 25 – The experimental Client operation mode by DJ Adams
Day 26 – UI5 – looking back and forward by DJ Adams
Day 27 – A non-techie PM's view of UI5 by Jon Gregory
Day 28 – UI5 Version Info by DJ Adams
Day 29 – Revisiting the XML Model by DJ Adams
Day 30 – The origin of becoming a fundamental enabler for Fiori by Sam Yen
If you need to get hold of me urgently (i.e. within hours), then give me a call. But be warned, I may not be able to answer immediately if I'm on a Pomodoro. Thanks!
(For more on email and work, see Things I do to make my work life better.)
In the developer console, I was examining the data structure in the JSON Model that was set on the List, and did a double-take. I'd mistakenly generated a map rather than an array, as the value of the property to which I wanted to bind the items aggregation. Naturally, I thought, it needed to be an array, but I had spotted that it was a map – the output of a nice little reduce function I was rather proud of, with my functional JavaScript hat on (but that's another story).
So I looked across to the app itself, expecting the List to be empty. But it wasn't! It was showing exactly what I had expected to see, had the value of the property been an array. What was going on?!
After some digging, I found out. Introduced on 10 Dec 2014, within the 1.28.0 release, was a modest feature:
**[[FEATURE] sap.ui.model.json.JSONListBinding: iterate over maps](https://github.com/SAP/openui5/commit/38ab764601c061d5fbf256f8bb4703cd4ec89022)** – "Enhance JSONListBinding to iterate over maps (by key), not just over arrays (by index)."
Interesting! A small modification to the JSON List Binding to treat the keys of a map as if they were the indices of an array. After all, in JavaScript, arrays and maps are perhaps more closely related than one might think. I set about confirming what I'd found with a small test on Plunkr, "Aggregation Binding Test":
But don't take my word for it – the author has also added a test to the JSON List Binding QUnit tests:
It makes sense to blur the distinction between maps and arrays when it comes to aggregation bindings; already I have a use for it, and I didn't even know the feature had been implemented!
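The idea behind the enhancement can be sketched in a few lines of plain JavaScript. This is an illustration of the concept, not the actual OpenUI5 implementation: when producing binding paths for a list, iterate an array by index and a plain object (map) by key.

```javascript
// Sketch of iterating a list binding's data by index (array) or by key
// (map). Illustration only - not the real JSONListBinding code.
function listBindingPaths(basePath, data) {
  if (Array.isArray(data)) {
    return data.map(function (item, i) { return basePath + "/" + i; });
  }
  // For a plain object, the keys play the role the indices play for arrays.
  return Object.keys(data).map(function (key) { return basePath + "/" + key; });
}

var fromArray = listBindingPaths("/authors", ["adams", "moy"]);
var fromMap = listBindingPaths("/authors", { adams: { n: 1 }, moy: { n: 2 } });
```

With the array, the binding contexts point at `/authors/0` and `/authors/1`; with the map, at `/authors/adams` and `/authors/moy`. Either way the List gets one context per entry, which is why my map-shaped data "just worked".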
Don't get me wrong, the openSAP initiative is excellent. Free learning materials of high quality? Yes please and thank you! This instills a passion in me (and I'm sure many others) for (a) learning more and (b) trying to attain the highest achievement. In the case of openSAP, this means trying to attain high marks in the assignments.
Unfortunately, the question and answer sections of the weekly assignments sometimes get in the way of that, in that the questions and/or answers are ambiguous. The current openSAP course "Build Your Own SAP Fiori App in the Cloud" has great content, but the questions are dubious. Here are a couple of examples that we're discussing on Twitter right now:
In the assignment for Week 2, there is the following question, with these four possible answers:
Within the context of SAP HANA Cloud Platform, where do applications run?
(a) In the HANA Database
(b) Inside the cockpit
(c) In an SAP HANA Cloud Platform account
(d) On the SCN community page of SAP HANA Cloud Platform
The officially correct answer has been marked as (c). But an account is not somewhere where code can be run. It's not an execution environment. It's an accounting, configuration and billing artifact. It's the credentials, the units of computing allocated and allowed, the sets of permissions for access to features and subscriptions, and so on. It's not an execution environment. So there's no way that anything can run in an SAP HCP account. The nearest correct answer as far as I could see is (a), but that's not entirely accurate either. However, the ambiguity of this question and the possible answers forces me to choose "the nearest that makes sense", which is (a), as (c) can certainly not be correct.
Another example is in the assignment for Week 3, where there's the following question and four possible answers:
Which end-to-end application development phases are currently supported by SAP Web IDE?
(a) Prototyping, developing, testing, deploying, and extending
(b) Requirements management, prototyping, developing, testing, deploying, and extending
(c) Prototyping, developing, functionality testing, A/B testing, deploying, and extending
(d) Developing, testing, deploying, and extending
The officially correct answer has been marked as (d).
The official download materials for this week contain, as usual, a complete transcript of all the units, the slides, and the videos. This is great in itself. Unfortunately, the official transcript records exactly what the instructor said, which is (starting at 00:02:22, bold emphasis mine):
*And we do so by covering the end-to-end application development lifecycle with one tool. And when we refer to the end-to-end application lifecycle development, we start from the **prototyping** of the application, then the development, the testing on the different devices of course, the packaging and the deployment into different application landscape and then later on after we released the application, also the extension of the application in order to customize it and make it suitable for the different scenarios and customers.*
The slide related to this section looks like this:
See that tiny couple of words in a footnote in the bottom left? They say "*future innovation". The instructor didn't mention this, so if you didn't see the slide, or were watching on your smartphone (which I was) where it was too small to see, but were nevertheless intently listening to her, and then reading the transcript to double check the facts, you would not have noticed this.
Now call me old fashioned, but if the transcript says that prototyping is supported, then I take it that prototyping is supported. But I don't just take the transcript's word for it … I do prototyping in the SAP Web IDE. I don't use the Powerpoint-based kit; I build simple views in XML, either by hand in the coding editor, or sometimes with the layout editor. So practically speaking, the SAP Web IDE does support prototyping, regardless of what is or is not said.
The challenge is not the course itself; the content, as I said, is great. The challenge is setting clear questions with unambiguous answers. Here are two occasions (and there have been others, on other openSAP courses in the past) where this is not the case.
I'm passionate about learning and sharing knowledge, and being the best I can be. Something like this, where incorrect answers are given as the officially correct answers, does make me somewhat sad.
But one thing's for certain: If you're reading this and not participating in the course, head on over there right now and catch up with these great learning opportunities!
Now this is worth shouting about. Around 3 hours after I took part in the discussions on Twitter this morning and published this post, the regular weekly "Welcome to Week N" email arrived in my inbox as usual. But what was special was this section:
Weekly Assignments: Problematic Questions in Weeks 2 and 3
Week 2: Within the context of SAP HANA Cloud Platform, where do applications run?
Week 3: Which end-to-end application development phases are currently supported by SAP Web IDE?
In both these cases, we realized that the questions were slightly misleading. You can find more information on the discussion forums for weeks 2 and 3. To ensure fairness to all our learners, we will assign full points for these questions to all learners who took the weekly assignments. Your scores will be adjusted at the end of the course.
This is the openSAP team directly and pretty much immediately addressing our concerns and worries, within a few hours. I cannot commend the openSAP team enough for this. Not primarily for addressing the issue (issues arise in all manner of contexts, that's normal), but for being ultra responsive and in touch with the participants of the course directly.
Other MOOCs, heck, other educational institutions in general, please take note. The openSAP team shows how it's done.
Greetings! It's time yet again to share a few newsworthy items that caught my eye this week in the world of Fiori. Let's get to it!
Ariba Total User Experience by Ariba We start out with something from earlier this month that just came to my attention via an article in SearchSAP – "Ariba unveils major overhaul of user interface". At this month's Ariba Live conference, Ariba revealed their new "Total User Experience" approach to improving the user experience for their products. And it comes as no great surprise to see that it is – as SAP have been saying it would be – aligned with the SAP Fiori UX approach. Here's a tweet from Tridip Chakraborthy:
#AribaLIVE Boom Woot Woot ! introducing the @ariba #mobile app #catalog ##SAPFiori user interface paradigm shift #UX pic.twitter.com/8XAZRzMkBJ
– Tridip Chakraborthy (@tridipchakra) April 9, 2015
You can clearly see the huge similarities in UX design and approach even from this one photo. The SearchSAP article states that "the Ariba UI does not share code with Fiori, but uses the same stylesheets, giving it a similar look and feel". In a post based on my keynote at the Mastering SAP Technologies conference earlier this year, titled "Can I build a Fiori app? Yes you can!", I'd written:
If you think about it, that abstraction, that distinction between philosophy and practicality, is the one way SAP can continue to forge ahead with some sort of (eventually) unifying user experience strategy, while at the same time dealing with the reality of products from differing sources, with differing frontends – Concur, Ariba, Lumira, and more.
That abstraction is clearly in evidence here. I'd be really interested to see more details of how Ariba's SAP Fiori UX "Total User Experience" looks under the hood, to discover how it ticks. It certainly looks great on the surface!
SAP Fiori Practitioners Forum by Katie Moser Katie announced this back in January, but I've only recently joined, and I'm looking forward to getting involved and sharing best practices with the other members. According to the post, this monthly forum is "designed to help you drive the successful deployment of SAP Fiori in your organisation".
I understand that the sessions so far have been very useful. As we have all discovered already, Fiori is a multi-faceted thing, and a place to discuss practicalities from design & configuration through rollout and beyond, with like-minded individuals, is a great idea. (Note that it's sensibly only open to those that have installed Fiori.)
SAP Fiori Theme for Kendo UI by Telerik
Well, not only do we have Ariba now embracing Fiori, but also a JavaScript UI framework by the name of Kendo UI. This framework is jQuery based, with AngularJS integration and support for Bootstrap and more. Unlike OpenUI5, which is the version of SAPUI5 (the toolkit that powers the SAP Fiori UX) that SAP open sourced, Kendo UI is software that comes in the form of a 30-day free trial, with a purchase required after that.
I watched the short video demo and it's an interesting prospect. It's not exactly the same, but pretty close. If you're like me, one who has pored over the controls in UI5 for a long time, things are not quite the same, although from a distance you could almost be forgiven for mistaking it for "the real thing" (how that is defined is another story).
It's worth bearing in mind that no amount of styling of controls will make an app into a Fiori app; while the styling is incredibly important and goes a long way to helping the developer build Fiori apps, it's just one pillar that supports the whole Fiori UX approach. The other pillars are responsiveness, design patterns and the other constraints that are well described in the SAP Fiori Design Guidelines.
Well, that's just about it for this week. Until next time, share and enjoy!
Well hello there folks. This week sees the start (for me) of a week off on holiday, but not before I put out this latest episode of TWIF for a quick roundup of things that caught my eye in the world of Fiori. If you have any stories to share, let me know!
SAP Fiori Application Development in the Cloud by Monika Kaiser & Karl Kessler The subtitle of this article in SAPinsider magazine is "Building, Deploying and Mobilizing Applications for Today's Enterprises". And as a great introduction, it certainly delivers on that. Not surprising, given Karl's pedigree in knowing about and writing about SAP technologies :-)
This is very much a getting started article, but where it scores is in the detailed and annotated set of screenshots that are useful for introducing folks to the whole process of building a Fiori app. Not generally, but specifically using SAP's HANA Cloud tools, including the SAP HANA Cloud Platform, the SAP Web IDE and SAP Mobile Secure.
The article does remind me of the conversation I have with many developers at customers and partners, as well as with individuals. It usually starts like this: "Q: Should I use SAP Web IDE as my main editor?", closely followed by "A: Well, it depends …". There's a mentality, or a mindset, amongst SAP developers that is hard to shake, because of decades of the same experience.
As ABAP developers, we've been used to having to use SE38, SE80, SE24 and the like. Having the tool question pre-answered for us. And many of us have waited on SAP's every word, even in the dark days when Eclipse was recommended as the development platform. Now we have a choice, but many are looking to SAP for recommendations. And it makes some sense – SAP needs to invest in building tools for the army of SAP programmers out there, for many reasons. With the SAP Web IDE, they've landed with both feet on the ground, in that it's not unpleasant to use and it comes with great productivity features that Just Work(tm). What's more, no-one is saying that SAP Web IDE should be your only editor.
Yes, I know that SAP Web IDE is based upon Orion, but you're not going to convince me that it's the same thing. I use SAP Web IDE to start some projects off; I've even dabbled with the great plugin and templating system (see "SAP Fiori Rapid Prototyping: SAP Web IDE and Google Docs"), and the test offline version (see "SAP Web IDE Local Install – Up and Running"). But I don't religiously stay with that as my main development environment … for that, I prefer a combination of a local NodeJS based server and the Atom editor right now. Mostly because a lot of the time I'm developing, I'm on the move, with little or no Internet access.
Today we're in a very nice situation where there are tools from SAP available, and we can choose to use them as much or as little as we see fit. For me that's a great improvement on earlier periods. Take a look at this article if you haven't seen the SAP Web IDE yet, and you can make your own mind up.
SAP Web IDE: The Simple Way to Build and Extend SAPUI5 Applications by Yaad Oren While we're on the subject of the SAP Web IDE, here's an opportunity to learn more about it, specifically from one of the many great folks involved in its development and nurturing.
It's an hour-long video, and includes a presentation from an SAP Web IDE user, PepsiCo.
(I wish SAP would make these videos available on YouTube too – I manage 95% of my viewing activities there, with playlists and "watch later", and can sit down in front of the TV to catch up. Please, SAP?)
User Experience sessions at SAPPHIRE NOW 2015 by Peter Spielvogel I don't normally talk much about Sapphire Now; I'm much more interested in SAP's main annual event – SAP TechEd && d-code :-) But of course, without the business, SAP, primarily a software and platform company that just happens to write business applications, would struggle to survive.
Yes, of course that was a troll, but I make no apologies for saying it. With huge emphasis on the User Experience (UX), you can expect plenty of sessions covering this topic and related topics too. The subtitle to Peter's blog post is "SAP Screen Personas, Fiori UX, Design Services". As you can imagine, being a conference focused on the business rather than the technology, on the surface rather than on the mechanics underneath the surface, you're not going to find much in the way of the toolkit that powers Fiori – UI5. There are a total of 8 sessions that I could find, via the agenda builder, that mentioned SAPUI5. But that's sort of the point. Much more important are the myriad sessions that Peter lists in his post, covering personalised user experiences with S/4HANA, SAP Screen Personas, SAP Fiori Launchpad and more.
The UX topic is wide and varied, and while I will continue to loosely categorise SAP Fiori as a strategic approach and SAP Screen Personas as a tactical approach to UX, the fact is that with the Launchpad becoming the new portal, and with businesses wanting access to more than what the current collection of SAP Fiori apps covers, there will be, for a long time, a hybrid solution to the overall user access and user experience to business data and processes.
What's important is that we understand where SAP Screen Personas fits in, and with the HTML5-based version 3 of the product (with JavaScript scripting support and more) just around the corner for all comers, we can easily imagine a cross-technology approach to all the tools required for a business user to carry out their responsibilities. With judicious use of theming and styling, we could move one step closer to that nirvana of a unified UX.
FIORI Notes 1: One UX to Rule them All by Wilbert Sison This week saw a simple post by Wilbert summarising a few of the key places to visit on one's journey to Fiori enlightenment: the Fiori Cloud Edition Trial, the Fiori Apps Library and the UI5 Explored app within the SAPUI5 SDK (the more I ponder the name, the purpose and what it's becoming, perhaps we should rename it from Explored to Explorer). What caught my eye with this post is that it was published in the ABAP Development section of the SAP Community Network, and it also gave rise to a short discussion on UI access to HANA.
First, the place the post was published. Fiori, and by direct inference UI5, is a cornerstone technology for SAP's product landscape. What this means in practical terms is that we as SAP technicians need to embrace UI5 as much as we embraced dynpro technologies in the past. It's that big. Having given a 3-day course on Fiori, UI5 and Gateway/OData last week, with my co-presenter Lindsay Stanger, to a collection of Web and ABAP developers (their own self-descriptions), it's worth re-iterating the reality for many of us out there; many of us so-called ABAP developers. For me, the concept of an "ABAP developer" is somewhere between "meaningless" and "unnecessarily restricting". Yes, there are developers out there that call themselves "ABAP developers" …
Then, there's the question of UI, that came up in the comments to Wilbert's post. It reminded me of a great Twitter thread initiated by John Moy where the frontend future for S/4 was discussed. I'll leave it to you to enjoy reading that thread, but the takeaway for me was that people do understand that while wall-to-wall Fiori might be the vision, the reality will be different, particularly in the transition period while the Fiori app suites are constructed and made available. And for those of you pondering the earlier point about ABAP, and this one where SAPGUI and therefore dynpro is not going to disappear any time soon, think of COBOL again ;-)
April New App Distribution via SAP Fiori Apps Library The SAP Fiori Apps Library is lots of things rolled into one. It's a nice talking point and focus for the Fiori pundits, an example of a publicly accessible Fiori app (where, being Web native, the frontend source code is available for perusing and learning from), and a good source of information on current Fiori apps. And I don't mean just human-readable information, but machine-readable data too. I'd exhorted SAP back in August last year (in TWIF episode 2014-35) to make the data available, to supply "a machine readable dataset". And that they have done, as of course the backend data source to the SAP Fiori Apps Library tool.
This of course is an OData source, from a HANA backend, and rich in information. Not only is it useful for powering the Fiori Apps Library app itself, but also for our own data-based analysis. You might have seen my post from earlier this year, where I showed you how to pull data from this very OData source into a spreadsheet:
Fiori App Data into a Spreadsheet? Challenge Accepted!
Thing is, while this data is valuable in and of itself, if you add a further dimension, time, it becomes perhaps even more valuable. What are the apps that are appearing over time, over the different waves? Are there any that are disappearing? The current total app count as of today is 541. Last month (an unscientifically and deliberately vague point in time, for now) it was 495. So that's 46 new apps that have appeared (none disappeared, I also checked).
I think it might be a worthwhile exercise to pull this app data on a regular basis, for comparisons over time. So as a starter, I have an experimental spreadsheet, Fiori Apps Data, with two snapshots, March and early April. I've added a few analysis tabs, and one of the products is this breakdown of new apps by area, that I've titled "New Apps Distribution".
Do you think this is useful? What other information can we work out with this new time dimension? How often do you think we should or could take a snapshot? Weekly? Daily? Could this be a community-curated data set?
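The appeared/disappeared comparison itself is simple to automate. Here's a hedged sketch in plain JavaScript; the snapshot shape (an array of app IDs per point in time) and the sample IDs are assumptions for illustration, not the actual structure of the Fiori Apps Library OData feed.

```javascript
// Compare two snapshots of app IDs and report what appeared and what
// disappeared between them. The snapshot format is a made-up assumption.
function diffSnapshots(older, newer) {
  var oldSet = new Set(older);
  var newSet = new Set(newer);
  return {
    appeared: newer.filter(function (id) { return !oldSet.has(id); }),
    disappeared: older.filter(function (id) { return !newSet.has(id); })
  };
}

// Hypothetical March and April snapshots:
var march = ["F0001", "F0002", "F0003"];
var april = ["F0001", "F0002", "F0003", "F0004"];
var diff = diffSnapshots(march, april);
// diff.appeared lists the new apps; diff.disappeared the removed ones.
```

Run against real monthly pulls of the OData collection, this is all that's needed to track the wave-by-wave growth (and spot any removals) over time.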
Answers on a postcard (or in the comments) please!
Build Your Own SAP Fiori App in the Cloud by openSAP This week saw the start of the new free course at openSAP, which, according to the description, is all about "building your own SAP Fiori app that's just as delightful and user-friendly as any of the hundreds SAP has built directly".
This is great news, especially for those of us who had signed up to the earlier course "Introduction to SAP Fiori UX" but had been rather disappointed that it had had nothing much to do with Fiori UX, and more to do with deployment and setup. I wrote about this in TWIF episode 2014-40. A number of us did have a dialogue with the openSAP folks at the time, and I'm delighted to see our comments were taken on board – this new course looks to be what we have been waiting for.
So we're into Week 1 of this new nine-week course, and already in the last unit of Week 1 – Unit 5, Introduction to SAPUI5 and OData – we're seeing JSON and XML on the slides, HTTP headers, and even a small glimpse at the superb UI5 toolkit, including a tiny controller and an XML View definition. This is more like it! Technical details on the slides.
Don't get too excited, however. I spotted some errors in this unit that aren't trivial. I've built courses before and I know how hard it is to get things consistent, but one thing you must do is be accurate. Here are some of the things I spotted:
"OData … is using SOAP and REST to communicate between systems"
OK, so first, REST isn't a protocol, it's an architectural style, so it is difficult to use a style to communicate between systems. But that is sort of forgivable, in that perhaps more accurately one could say that the OData protocol has RESTful tendencies. But SOAP? No. OData has nothing to do with SOAP; in fact, one could say that the OData protocol is orthogonal to SOAP.
"One of the most important libraries we have today is sap.ui.m"
I'm guessing that's just a typo that found its way up through the layers to the actual presentation script. Because while there are libraries with the sap.ui prefix, there is no sap.ui.m. What the instructor is referring to is sap.m. The m originally stood for "mobile", but now stands for "main". The sap.m library is one of the main collections of responsive controls which are used to build Fiori apps. For more info, you might want to read "M is for 'responsive'".
"We have a library [sap.ui.table] for table, and that provides me with the ability to create a table that is very rich in data but also responsive"
For responsive tables, you probably want to look at the sap.m.Table control, rather than the sap.ui.table library, as the former is designed from the ground up to be responsive, whereas the latter is more for desktop apps.
MVC – View <-> Model data binding
In slide 13, there's a classic MVC-style diagram, but the data binding relationship between the view and the model seems to be shown as one way only:
One of the many features of the powerful model mechanism and the data binding therein is that you can have two-way binding. So I'd have drawn that arrow pointing both ways.
XML View definition
Being a stickler for accuracy (perhaps to the point of pedantry, of which I'm proud, not apologetic :-), this XML View definition on slide 14 is not quite accurate:
The View is within the sap.ui.core.mvc namespace, not the sap.ui.core namespace, so the root element here should reflect that, like this:
<mvc:View xmlns:mvc="sap.ui.core.mvc"
Router? Bueller?
So if Iām going all out, I might as well mention that one thing that I think slide 16 could have benefitted from is mention of the Router in the architecture overview diagram. I do appreciate that these slides may have come from a time before the Router concept was properly established, but the Router is an incredibly important part of any Fiori app, so it would have really helped to see it here.
That said, now you know, you can go and find out more about it! :-)
Donāt get me wrong, Iām very excited about this course, and these issues can be ironed out now theyāve been surfaced. Iām looking forward very much to Week 2.
Fiori Breakfast Event by Brenton OāCallaghan, Lindsay Stanger and me On Tuesdsay morning this week in London, Brenton, Lindsay and I, along with other great Bluefin folks, ran a breakfast eventĀ all about Fiori. It was a really successful gathering, with business and technical attendees from SAP customer companies who were already, or were about to, or were just interested in embarking upon their Fiori journey. We had a special guest from one of our clients too, and to be honest, she stole the show :-)
It was clear from the event that people are realising that Fiori is not only here, itās here to stay, and itās a journey that is not just about new apps, but about a new SAP. If youāre reading this TWIF column, you already know that. Itās a genuinely exciting time for us as customers, partners and consultants, not only because of the UX aspect, but also because the present and future that is Fiori is based upon open technology standards that are right. SAP has grasped the nettle of user experience, and embraced the right tools and technologies. Good work!
Well that was rather a longer post than usual, so in the interests of keeping this to something you can read in a coffee break, I'll leave it here, and wish you well. Until next time!
Greetings! Last week saw the return of the This Week in Fiori series, with a video from me and Brenton. More on that video shortly. Before last week, the previous episode had been in October last year. So much has happened in the Fiori world that it would be crazy to try and cover it all. Instead, over the next week or two, I'll pick out some items that stand out.
So let's get started with some picks for this week.
Filtering Fiori Apps by Release by Gregor Brett In last week's video, we looked at the Fiori Apps Library app and found that it wasn't easy to identify the latest apps. I mentioned that while the Fiori Apps Library app itself didn't expose the information in that way, the data was actually available, and laid down a challenge for anyone to make the app do just that.
Just a few days later the first response appeared: Gregor Brett came up with a nice solution, which was to patch the running Fiori Apps Library app, adding a new View Settings Filter Item to the filterItems aggregation of the actual View Settings Dialog used in the app. The items within that new View Settings Filter Item were bound to a data collection that was already being exposed by the backend in the OData service, namely the Releases_EV collection, which gave information on Fiori Wave numbers and dates.
Bingo! Nice work Gregor.
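For a flavour of what that kind of construct looks like, here's a declarative sketch of a View Settings Dialog with a bound filter category. Gregor's actual patch was applied at runtime in JavaScript, and the "Release" label and the Name property below are my assumptions, not taken from his code:

```xml
<ViewSettingsDialog xmlns="sap.m">
  <filterItems>
    <!-- The new filter category; its items are bound to the
         Releases_EV collection exposed by the backend OData service -->
    <ViewSettingsFilterItem text="Release" items="{/Releases_EV}">
      <items>
        <ViewSettingsItem text="{Name}" key="{Name}"/>
      </items>
    </ViewSettingsFilterItem>
  </filterItems>
</ViewSettingsDialog>
```

The nice part of the approach is that no new backend development was needed; the data was already there, waiting to be bound.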
The Fiori Community by the SAP Community Network Since the last episode of TWIF last year in October, SAP have created a new community within the SAP Community Network for Fiori. There's already a community for SAPUI5, but now there's a specific community for Fiori. I spoke about this in my keynote at Mastering SAP Technologies last month, and it's an interesting and important distinction that SAP are making.
If you think about it, Fiori as an umbrella term is gigantic. It could be seen as a lot of things to a lot of people. Separating out the technical underpinnings (UI5) from other aspects (Fiori application configuration, extension and maintenance, UX design, deployment and platform subjects, and more) was only going to be a matter of time, if only to make the subjects more manageable.
But also remember that future Fiori offerings from SAP may not be powered by UI5. Of course, all of the Fiori offerings now and in the near future are, including all the S/4HANA applications such as the SFIN set, but when you consider SAP's purchases (Ariba, Concur and SuccessFactors, to name but three), a unified UX strategy is not going to happen from re-engineering the whole UI/UX layer of those (previously) third party products.
Visit the new SAP Fiori community and have a look around. It looks like it's here to stay :-)
Planning the Fiori ABAP Frontend Server (FES) - Architecture Questions by Jochen Saterdag Getting your Fiori apps served to the frontend involves making the following things available: the OData services, the Fiori Launchpad, the Fiori app code (views, controller logic, and so on) and of course the UI5 runtime. SAP has been slowly but surely socialising the term "frontend server" to refer to a system that fulfils this role. I first heard the term from SAP Labs folks in Israel back in 2013 (see "An Amazing 36 Hours at SAP Labs Israel"), and it's becoming more pervasive these days. In modern parlance, perhaps, it's now properly "become a thing".
Of course, there are always considerations when planning such a server, and Jochen does a good job with this overview blog post. He answers some important questions, including whether you should use an existing PI system as the base for such a frontend server ... the answer, clearly, is "no".
10 tips to get you started on your Fiori development journey by me Well, what's the point of having your own blog post series if you can't talk about your own content now and again? ;-) As I mentioned earlier, I spoke at the great Mastering SAP Technologies Conference in Feb this year. I wrote up my keynote into two blog posts, the second of which was a "top ten" style list. I'm sure there are many of you looking to embark upon this journey, so I thought I'd put together tips on what worked for me. If you're interested in the first of the two posts, it's "Can I build a Fiori app? Yes you can!".
Well that's about it for this week. See you next time!
I recently wrote about trying to become more effective and efficient in "The Maker's Schedule, Restraint and Flow", and in that post I referred to a video of a great talk by Scott Hanselman, in which he talks (in the "Conserve Your Keystrokes" section) about preventing information from getting lost and losing value by being trapped in emails, when it could be shared and repurposed.
So when I find myself replying to an email, and writing more than a few sentences, I'm trying instead to store that reply, that information, in a place where it will live longer, and have the chance to help folks beyond the original email addressee. And after storing that information, I just send a link to that place instead.
Thanks for reading!
Both the manager's schedule and the maker's schedule are important, but resonate differently and don't mix. When making, building, creating things, solving problems, interruptions are disastrous, for all the reasons that Paul explains.
On the other side, time management, the proper organisation of tasks, and working out what work to do, and how, doesn't come for free. Managers and makers alike need skills in these areas. In order to build these skills, each one of us needs to understand, first of all, that the areas actually exist. Email, phone calls, interruptions, the almost endless todo list and prioritisation issues are all things that we need to manage. And I recognise that I need to manage those things better. I use the Pomodoro Technique on occasion, but that's just one tool. I also need to learn restraint. I need to resist the temptation to say "yes", and to allow myself to be interrupted. If I get it right, I will find myself in flow more often. And that's the mode in which makers (developers, in our context) work.
Since that original article on the Maker's Schedule, I've come across many other great articles and videos, and I wanted to share a few of them with you here, as you may find them useful too.
Remember: saying "no", creating situations where you're less able to be interrupted, and using task and time management techniques that work for you, ones that let you produce more (or less, but that's the subject for another post), is what we should be doing. Don't fall into the trap of thinking that just because your project manager thinks and works in one-hour chunks of time, you need to as well. Of course, real life has a habit of getting in the way, but don't let that stop us trying to be our best.
Further viewing & reading:
Scott Hanselman: It's Not What You Read, It's What You Ignore
Johnny Wu: Developer Productivity - The Art Of Saying "No"
Inbox Pause (great as an idea as well as this implementation)
Last year I started the "This Week In Fiori" (TWIF) series looking at news, events and articles in the Fiori world. The last post (2014-43) was in October 2014, written by Brenton O'Callaghan.
The Fiori world is growing and spinning even faster, and Brenton and I decided it was time to pick up where we left off. To get the ball rolling, we recorded a half-hour session at the end of this week, looking at some news in the Fiori world. This time we took a more technical flavour, remembering that Fiori is UX, but ultimately built with UI (see "Can I Build A Fiori App? Yes You Can!" for more on Fiori UX vs UI); there are always two sides to any single coin.
If you have any news, or any suggestions for future TWIF episode topics, just let us know!
Here's this week's episode. Thanks Brenton!
Share & enjoy!
It would be an odd situation indeed to find a unified consensus on any software, let alone software in this particular context (HTML5 development toolkits and frameworks) where, if you don't have an opinion, you're looked upon as an outsider. So I wanted to state before I start that there is no single correct answer, or even a single toolkit to rule them all, and Greg makes some important points.
I thought I'd look at the individual points that Greg made.
"Proprietary framework, no thanks."
As a lot of folks already know, UI5 is far from proprietary. It is written and maintained by web developers who work for an enterprise software behemoth, but the key difference is that UI5 has been open sourced, as well as using many open source libraries itself. In the article there's a contrast made between "proprietary" and "industry standard" as though they're opposites. This is not the case. So I'm not sure whether the criticism being levelled at UI5 is about its proprietary nature (which is not the case) or about (not) being an industry standard. This latter point is debatable: a toolkit powering frontend software across the entire ERP landscape for SAP customers feels like a de facto industry standard to me. Yes, not every company has adopted Fiori, but for one that drives its business on SAP products, UI5 is a likely software component.
I'm curious about the "SAP quirks" phrase which is also mentioned in this point. I'm not sure which quirks are being referred to, but if industrial strength design, MVC, internationalisation, automatic support for RTL languages, client and server side model support and an accomplished data binding system are SAP quirks, then yes please!
Further, AngularJS is mentioned as a framework with a huge community behind it. From what I can see, that community is fracturing, due to the major upheaval in (re)design between the 1.x and 2.x versions. That's not to say that this couldn't happen to UI5, but it's actually happening right now with that framework.
"SAP Backend Upgrade?"
To do UI5-based apps "properly", or "the SAP way", then this is true; if you don't already have a Gateway system in your ABAP stack landscape, then you'll need one, and also the UI2 add-on with which the UI5 runtime is supplied.
In my experience, however, it's increasingly less common for an enterprise not to have a Gateway system somewhere; and with NetWeaver 7.40 you get the components built in as standard anyway. Further, installing Gateway components is often a coffee time activity.
But not wanting to over-trivialise this important original point, I wanted to point out the alternative; an alternative that is the most likely scenario anyway for a non-UI5 deployment such as AngularJS: a separate web server. You can just as easily host and serve your UI5 based applications, along with the UI5 runtime, from a web server of your choice. Then accessing the backend becomes the same task as if you'd chosen a different (non-UI5) framework.
And on the subject of accessing the backend, the point that was made about "remote enabled functions" does intrigue me. One of the advantages of UI5 is that it supports OData (an open standard, by the way), and one of the advantages of OData in turn is that it is a server-side model.
Calling remote function modules in this day and age is certainly possible and sometimes the only choice, but you're not going to take advantage of server-side heavy lifting when it comes to data integration with your frontend. I've built Web-based apps with SAP remote function calls since the 90s, so I have the scars :-) Not only that, but the data abstraction model presented by the RFC approach is somewhat orthogonal to modern web based app data mechanisms.
"Browser Support"
This is of course always an interesting issue, but as an individual developer, and as a member of a development team, I prefer a solid statement about a well defined set of modern browsers which are supported by the toolkit I use, rather than have to do that job myself and deal with the vagaries that present themselves on a daily basis. Of course, rolling your own gives more flexibility, but it's often more work.
And at least for the clients that I work at, the fact that (a) the browser choice is usually somewhat controlled anyway, and (b) in the BYOD context people even choose (choose!) to bring Windows phones, which are supported by UI5, underlines that choice for me.
"Frontend Developers Don't Care"
At the risk of appearing obtuse, I'm going to absolutely disagree with this statement :-) Frontend developers do care; they care about the quality of the software they work with, about how and whether the toolkit they use does the job without getting in the way. Of course, this caring, this obsessive compulsion to be using the right framework and doing the right thing may mean that for some developers the choice is something other than UI5.
And that would be fine. There is no one piece of software that fits all requirements or circumstances, in any context. In the past I have used jQueryUI, JQTouch, AngularJS and other frameworks. And I would never rule them out for future projects. But right now, I'm investing time and effort in UI5, because it's open source, it's enterprise ready, it's been designed & built and is maintained by committed, passionate designers and developers just like you and me (well, a lot more competent than me) and it is also fully in tune with SAP's technology directions.
Skills in UI5 are going to be useful not only for building out the current and next generation of proper outside-in apps, but also for supporting the deployments, customisations and extensions for Fiori. A nice side effect at which one should not sniff.
In Feb 2015 The Eventful Group ran a great conference in Johannesburg - Mastering SAP Technologies. I was honoured to have been invited as a speaker, and I gave a keynote on the first day; the keynote was one of three items I was contributing to the agenda.
The title of this piece is in two parts. If we're not careful it could be a very short piece, because I've already given you the answer in the second part - "Yes!". What else do you need to know?
Well, let's start with some assumptions. I'm going to assume that, at least to a greater or lesser extent, you're possibly a developer, or at least are of a technical nature ... otherwise, you may want to stop reading now ;-). And that you're wondering about Fiori. What it is, how it works, what the component parts are, and how you put a Fiori app together.
You might be faced with the exciting yet terrifying prospect of building one from scratch; you might be more in the game of modifying and extending existing standard SAP Fiori apps. And you'd be in a good place; Fiori is a huge part, some might argue the single most important part, of SAP's frontend future.
In order to work out why the answer to the question is "yes", let's back up a bit and start with a few definitions. Let's have a look at what Fiori means, what it represents.
It's a philosophy. It's a novel approach to work where the focus is not on a thousand features, the focus is on a particular undertaking that a business person, wearing a particular hat, needs to complete. It's about moving from a transaction oriented view of work to a role and task oriented view. Perhaps you've seen the 1-1-3 concept in early Fiori documentation - one user, one use case, three screens.
It's user experience. UX, as the hip designer kids say these days. This is pretty closely related to the 1-1-3 concept. Three screens. What do those screens look like? It's not about the colours, but it is about what a user sees, and perhaps just as importantly what a user doesn't see. It's also about how a user navigates through the task at hand, and also how they become familiar with visual paradigms so that when they move from one task, say, approving a purchase order, to another, such as managing a product, things are familiar, and they know what to expect.
It's cross platform. And that means written for the One True Platform, i.e. the web. Web native. So it runs on different devices, with varying screen sizes. Desktops, tablets, smartphones. Even Windows phones! If that's not cross platform, then I don't know what is.
So I've got this far and our conclusion must be that Fiori is actually a state of mind.
There are these vague but well-meaning notions that describe pretty well the "how" and the "why" but what we haven't really covered is the "what".
But that's partly the point. I used the phrase UX, and specifically UX. Not UI. There's a distinct difference between the general notions of user experience, and how that user experience is realised. At some stage, in every computing context, you're going to have to come down to bare metal.
And in our case, that bare metal is at the UI layer. There's also the data layer, don't worry, I haven't forgotten about that. But let's just concentrate on the frontend for now.
Have you noticed the subtle distinctions that SAP are making with regards to Fiori UX and UI? I outlined that distinction in a blog post around this time last year: The essentials: SAPUI5, OpenUI5 and Fiori. Now SAP are underlining that distinction by creating a brand new community in the SAP Community Network, specifically for Fiori. There's already a community for UI5, but now there's a separate community for Fiori. And that's sort of the point I'm trying to make.
Before we get down to UI5, let's just consider this abstraction we know and already have started to love, called Fiori. It could be realised with all sorts of different technologies. If you think about it, that abstraction, that distinction between philosophy and practicality, is the one way SAP can continue to forge ahead with some sort of (eventually) unifying user experience strategy while at the same time dealing with the reality of products from differing sources, with differing frontends - Concur, Ariba, Lumira, and more.
Don't hold your breath, they haven't even managed to get login working properly and cleanly on their service portal even after more than a decade ;-) But the thought and the focus and the intention is very much there.
So Fiori is technology agnostic, and deliberately so. But at some point you're going to want to actually build something, so let's start to descend through the clouds down to reality.
We know the runtime platform for Fiori is the Web. That means HTML5.
HTML, CSS, JavaScript. But cross platform at this level only tells us half the story. Where's the data coming from? An SAP backend system. You could say it's "cross backend" too. ABAP and HANA stacks are the source for the business data and functions that power Fiori apps, made available via a unifying layer, which we'll look at shortly.
So, let's get to it.
Right now, practically speaking, to build a Fiori app, you need three things: UI5, OData, and Nothing Else. (with sincere apologies to the late, great, Douglas Adams).
Let's start with UI5. UI5 is a toolkit for building client-side apps that run in the browser. There are of course other libraries, toolkits and frameworks out there that are in the same space, but this one is special. This one is from SAP, so it's industrial strength, enterprise ready, full of features, and a large part of it has been designed and built from the ground up for Fiori. What sort of features is it full of?
Well, full support for Model-View-Controller, for a start. Fiori apps can be complex beasts, and adopting an MVC approach to your code design is almost a must, if you want to survive with your hair intact.
And then there's a very accomplished data model mechanism for client and server side models, with a rather powerful binding system.
Need to write apps that work in different languages, some of them right-to-left? Got that covered. Need to make your apps extensible? Yep, got that covered. Need to build your views declaratively? Yep. Want to construct your complex designs in a componentised way, with routing in between? Yep. You get the picture.
And have I mentioned JavaScript? Well of course not, it almost goes without saying. JavaScript is the water that flows through the channels in the browser; for many, it's the new assembler, the new ultimate compilation target. And UI5 is a JavaScript toolkit.
There's a lot of navel gazing out there right now about web toolkits and frameworks not being "native" enough, not being JavaScript-y enough. Frankly, I don't understand that. The whole point of a framework, of a toolkit, is to make you more productive. And to do that by providing abstractions and mechanisms that allow you to get things done, to build responsive user interfaces and interact with data in backend systems, while not tripping you up or getting in your way.
So to build with UI5 is to build using JavaScript, but it's not the full story. It's understanding and properly wielding MVC. It's understanding how to build applications where your application logic is separated from your view definitions. And it's understanding where the joins are. It's also understanding how to build an application that allows a user to get on with the task in hand. They have a role, they have a task to perform, and they want to carry it out with as little fuss as possible.
But it's also understanding where the data comes from, and where the frontend meets the backend.
And that's where OData comes in. OData is a protocol and a format. Folks like to say that OData came from Microsoft, but the truth is actually a lot more interesting. It came from RSS, or rather, from the broken community that was borne out of a specific person trying to own the space (and failing).
The Atom syndication format was a potential replacement for what we knew and loved as RSS. It was designed to represent things. Blog post things, initially. Collections of things, feeds of entries, sets of entities. And then came a RESTful protocol to go hand in hand with that syndication format - the Atom Publishing Protocol. This protocol, APP for short, gave us the ability to manipulate those things, those entries, those entities. Create them, read them, update them, delete them, and query for them. Sounds familiar? Yes, of course, I'm describing the OData CRUD+Q operations.
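Those CRUD+Q operations map directly onto HTTP methods and URL shapes. As a rough sketch in plain JavaScript (the Products entity set and the example filter are illustrative, and this uses no UI5 or OData library):

```javascript
// Map each CRUD+Q operation to the HTTP request an OData consumer would make.
// "Products" is just an example entity set name.
function odataRequest(operation, key) {
  const base = "/Products";
  switch (operation) {
    case "create": return { method: "POST",   url: base };
    case "read":   return { method: "GET",    url: `${base}(${key})` };
    case "update": return { method: "PUT",    url: `${base}(${key})` };
    case "delete": return { method: "DELETE", url: `${base}(${key})` };
    case "query":  return { method: "GET",    url: `${base}?$filter=Price gt 10` };
    default: throw new Error("Unknown operation: " + operation);
  }
}

// e.g. reading a single entity:
odataRequest("read", 7); // { method: "GET", url: "/Products(7)" }
```

The query case is where OData earns its keep: filtering, sorting and paging are expressed in the URL and executed server-side, rather than in the browser.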
SAP adopted OData as a standard a few years ago, when they finally saw the light, and started looking for something to counter the onset of the WS-Deathstar syndrome, that was being brought on by the sheer weight of complexity that enterprise web services was imposing on the stack.
It's as near to a REST framework as they could manage; although, in fact, there's no such thing as a REST framework. Like Fiori, REST is an approach, an architectural style, a philosophy, as much as anything else.
So where does that leave us? At a high level, we need to know about UI5 and OData. But there's more to it than that. Not to mention the question of whether you want to become a "full stack" Fiori developer, or just a frontend developer, or even just a backend developer. And if a backend developer, a full stack backend developer, or someone who focuses on "just" the business logic, or "just" the OData parts. There isn't enough time to cover all of that, but I'm sure you can extrapolate downwards into the data roots.
But for your journey to become a Fiori app developer, knowing you need UI5 skills is not enough. UI5 is a large and multi-faceted thing. How do you wield it? How do you work out what bits you need to master? And the same goes for OData.
If you're wanting to take your first steps on that journey, then I encourage you to read 10 tips to get you started on your Fiori development journey.
Fiori is a great initiative, and it's supported right now at the bare metal level with a superb toolkit, UI5. That toolkit has had years and years of passion, love, experience (and blood sweat and tears) baked into it by people far more talented than I am. So I do the only sensible thing, and embrace all that hard work and put it to work for me. You can too.
Following on from my previous post Can I build a Fiori app? Yes you can! here's a top ten list of tips for the next steps on your journey to become a Fiori developer.
Read the content of SAP Fiori Design Guidelines website. And then read it again. Fiori apps are successful at a UX level because of the consistent design that abounds.
The design didn't just happen by accident - Fiori apps don't look the way they do for a random reason. They're not immediately recognisable by pure chance. Everything, including the pixel-perfect precision of the design and the space between the elements, is deliberate. Get that under your skin. Understand what the different application types are; know what a master/detail pattern is and what it's used for; get to grips with patterns and controls. Appreciate the use of filters, the placement of action buttons, and the subtleties of responsive design. And remember: often, less is more.
First, a factoid. The first real customers of the UI5 toolkit, and specifically the sap.m library and the controls therein (sap.m is one of the many libraries within the UI5 toolkit), were the internal teams of Fiori developers at SAP. They'd been tasked with building the first few waves of Fiori apps, and needed controls to satisfy the app designs. They needed visual building blocks with which to construct clean and consistent apps.
The superstars on the UI5 team in Walldorf and elsewhere - the designers and developers - built out the controls inside the sap.m library, specifically with those Fiori developer teams in mind. Fiori is built with controls in the sap.m library.
Yes, of course, there are other controls that are also utilised, such as those in the sap.ui.layout library. And those are super important too (for an example, see the Grid control in this post: UI5 features for building responsive Fiori apps).
But the visible building blocks that are used to construct a Fiori app come from the sap.m library. These days the "m" stands for "main"; originally it stood for "mobile", as a reference to the responsive nature of these controls.
How do you go about getting to know sap.m library controls? Start with the Explored app in the UI5 SDK; it's a super resource that gives you real examples of how the controls can and should be used, and you can say "show me the code" too.
These two concepts from the UI5 toolkit are essential for building non-trivial apps properly. Arguably, you can build Fiori apps without these two concepts, but you won't be able to include them in the Fiori Launchpad, and you won't be able to navigate to them from other Fiori apps.
The Component concept is fairly straightforward, but the implications are subtle and wide-ranging. The essential mantra is "think local, not global". A proper Fiori app should be self-contained, and not refer to global mechanisms such as the UI5 runtime core and the central event bus. Each component has its own event bus, as well as its own router and routing definitions.
If you examine the details of the app that accompanies the UI5 Application Best Practices guide in the SDK, you'll find examples of how to build using Components and routing.
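To see why a per-component event bus matters, here's a toy publish/subscribe sketch in plain JavaScript. This is an illustration of the "think local" idea only, not the actual sap.ui.core.EventBus API:

```javascript
// Minimal publish/subscribe bus -- an illustration of the concept,
// not the UI5 implementation.
class TinyEventBus {
  constructor() { this.handlers = {}; }
  subscribe(channel, fn) {
    (this.handlers[channel] = this.handlers[channel] || []).push(fn);
  }
  publish(channel, data) {
    (this.handlers[channel] || []).forEach(fn => fn(data));
  }
}

// Each component owns its own bus, so events in one component
// cannot accidentally leak into, or collide with, another.
const busA = new TinyEventBus();
const busB = new TinyEventBus();
const received = [];
busA.subscribe("nav", payload => received.push(payload));
busA.publish("nav", "detail");   // handled by busA's subscriber
busB.publish("nav", "ignored");  // different bus, nothing happens
// received is now ["detail"]
```

Contrast this with everyone publishing on one global bus, where two apps on the same Launchpad could trip over each other's channel names.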
We've already come across MVC so I don't need to say too much there. So what do you need to do? Learn how the structure of an app is built, using screen-sized and invisible controls, and how the views within that structure are related to each other and to their controllers.
And love it or loathe it, XML is in your future. All standard SAP Fiori apps have their views defined declaratively in XML. And you'll quickly find out why - it's the most concise, efficient and clean way to do it. Not a fan of XML? Get over it.
Everyone will have their favourite editor, their favourite development environment in which they're most productive and where they can comfortably build Fiori apps. Sublime Text, vim, WebStorm, Atom, even (for the masochists) Eclipse!
SAP's WebIDE might not be that favourite environment. But it's got a lot of things going for it, and you don't have to make it your main environment.
Use the WebIDE to kick start your Fiori development journey. Extract and examine the reference apps, which have been placed there by the folks in the Fiori Implementation Experience (FIX) team.
Begin developing a new Fiori app from one of the starter templates, or even starting from one of the reference apps. A Fiori app has a lot of moving parts; if you're just starting out, getting help getting those moving parts going and working well together is worth a lot.
If you've developed in the past within the soft padded walls of an ABAP stack, you've had everything done for you. Or done to you, depending on your perspective. You didn't have to think about your editor, about version control, about syntax highlighting and linting, or even about serving your app up for testing. That was (and remains) the old world. Developing apps for the web is new. This is not some inside-out based development where you create your UIs inside of an ABAP stack and then push them out to be rendered in the target browser. This is grown-up outside-in development where you're developing directly for the new runtime - the browser.
There are plenty of guides showing how you can set your own development environment up and get your development workflow going. Find one that suits you and get going with it as soon as you can.
Northwind is the well-known reference OData service that's out there and available. This tip is not necessarily about the Northwind OData service per se; it's more about making yourself (a) familiar with OData and how it works, and (b) doing that in a way that's independent of any backend SAP system. In light of this, getting to know the Mock Data Server mechanism, which is also part of the UI5 toolkit, is also essential.
Yes of course you're going to want to build Fiori apps that consume data from an SAP backend, and that also means OData. But that can sometimes be quite an expensive goal in the early days; it might be that the OData service isn't ready, or you haven't got access to it, or you're just on a train trying to get something done in your local development environment and can't get connected to that backend OData service. You can accelerate your journey along the Fiori development learning curve by being independent of any specific backend system. By being self-contained.
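The value of that independence is easy to see in miniature. Here's a toy, dependency-free sketch of the idea behind a mock data source; it has nothing to do with the actual UI5 MockServer API, and the entity set and properties are invented:

```javascript
// A tiny in-memory "service": entity sets keyed by name, with an
// optional predicate standing in for an OData $filter expression.
const mockData = {
  Products: [
    { ID: 1, Name: "Chai", Price: 18 },
    { ID: 2, Name: "Chang", Price: 19 },
    { ID: 3, Name: "Aniseed Syrup", Price: 10 }
  ]
};

function getEntitySet(name, predicate) {
  const entities = mockData[name] || [];
  return predicate ? entities.filter(predicate) : entities;
}

// Works on the train, with no backend in sight:
getEntitySet("Products", p => p.Price > 10).length; // 2
```

The real mock server mechanism goes much further (it intercepts the actual OData requests your app makes), but the principle is the same: your app development no longer waits on backend availability.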
Building ABAP based solutions, you'll know that the debugger is a powerful ally. The Chrome Developer Tools, along with the UI5 Support Tool are the equivalent, and more, in this new world.
You're using Chrome, right? That pretty much goes without saying; it's just as important as, if not more important than, your editor; in fact, it is becoming the editor.
Get to know how to wield the superb development, debugging and tracing features of the Chrome Developer Tools; understand what the UI5 Support Tool can offer you. If you do nothing else today, hit Ctrl-Alt-Shift-S on a running Fiori app and have a look at the Control Tree panel.
Data binding is where the frontend meets the backend. Master it. Understand the nuances of object, property and aggregation bindings; learn the subtleties and features of complex embedded binding syntax, how to specify sorting, filtering, grouping, formatting and factory functions.
A lot of what you might think is achieved through imperative code in controllers is in fact achieved through declarative binding. Don't be scared of it, it wants to be your friend. One thing I'll say here, which is only partly true but something that will help you as you bear it in mind: If you find yourself making explicit OData calls exclusively, it is possibly a bad code smell. Not all the time, but there's a chance.
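At the heart of all binding is the humble binding path. As a plain-JavaScript illustration of how a client-side model might resolve a path like "/authors/0/name" against its data (this is a toy, not the UI5 JSONModel implementation, and the model content is invented):

```javascript
// Walk a slash-separated binding path down into a nested data object.
function resolvePath(data, path) {
  return path
    .split("/")
    .filter(Boolean) // drop the empty segment from the leading "/"
    .reduce((node, segment) => (node == null ? undefined : node[segment]), data);
}

const model = { authors: [{ name: "Douglas Adams", countryOfBirth: "GB" }] };
resolvePath(model, "/authors/0/name"); // "Douglas Adams"
```

Once you internalise that a control property such as text="{/authors/0/name}" is just such a path being resolved (and kept up to date) for you, a lot of the declarative-over-imperative advice above starts to make sense.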
This tip of course is possibly the most important, and the most generic. If you want to learn, improve your skills in, or eventually master something, the one thing you cannot afford to avoid is reading. Looking at existing examples of what you're trying to learn to build. Understand how to get at the non-minified sources of standard SAP Fiori apps. Look at the templates and the reference apps in the WebIDE.
Make your breaks work for you by poring over other people's code while you pour the hot water over your ground coffee. And yes, not everything you read will be great examples of the Fiori app art. Remember that the folks who wrote the Fiori apps are just like you and me; they've just had a bit of a head start, that's all. It won't all be gold standard code. But even the bad code is useful to read; find patterns and anti-patterns and learn from those.
Of course, if you ask others how you go about learning to build Fiori apps, it's likely that they'll have other tips too. But I'm pretty sure that these will be the common denominators. And they're all things that have helped me on my journey. Happy travels!
Building responsive apps in UI5 starts with using appropriate controls. The majority of the controls that were created from the ground up to be responsive are to be found in the sap.m library. The "m" in "sap.m" originally stood for "mobile" but now stands for "main", reflecting the key focus on responsive design for the UI5 toolkit.
But using these controls is just the start. Making an app properly responsive means paying close attention to the device capabilities and making design and runtime decisions appropriately. This document outlines some of the main facilities in this regard. The examples are mostly based upon the Explored app in the UI5 SDK.
The Split App (sap.m.SplitApp) "maintains two NavContainers if runs in tablet and one NavContainer in smartphone. The display of master NavContainer depends on the portrait/landscape of the device and the mode of SplitApp". In other words, it does different things depending on the device.
You can see this in action if you examine the control tree when running on a non-phone and when running on a smartphone.
These screenshots are from the UI5 Support tool's control tree display for the Explored app, which uses a Split App control within the view that is returned from the Component's createContent method.
Often declared on a Component during initialisation (or in the createContent method), the device model is a client-side (JSON) model with pre-defined boolean values relating to device information returned from the Device API. These values can be used in declarative views to set control visibility depending on the device, for example.
In the Explored app, the device model is declared and set in the Component's createContent method. It is used declaratively, for example, to control whether the Icon Tab Filters are initially expanded or collapsed in the entity view.
UI5 has a device API which can be queried to find out the device type and more. The API is used to build the values in the Device Model, but also used directly in controller logic. An example of this direct use can be seen in the sample controller.
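The shape of such a device model can be sketched in plain JavaScript (this is an illustration of the pattern, not the actual sap.ui.Device API; the breakpoints and the makeDeviceModelData name are made up):

```javascript
// Sketch of the Device Model idea: a handful of precomputed booleans,
// derived once from device information, then exposed to views for binding.
// In real UI5 the values come from the Device API; a width check stands in here.
function makeDeviceModelData(screenWidthPx) {
  // Breakpoints are illustrative, not UI5's actual ones
  return {
    isPhone: screenWidthPx < 600,
    isTablet: screenWidthPx >= 600 && screenWidthPx < 1024,
    isDesktop: screenWidthPx >= 1024,
    isNoPhone: screenWidthPx >= 600
  };
}

console.log(makeDeviceModelData(480).isPhone);    // true on a phone-sized screen
console.log(makeDeviceModelData(1280).isDesktop); // true on a desktop-sized screen
```

A view bound to such a model never needs to ask about the device itself; it just binds to the booleans.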
The Grid mechanism is a control that is found not within the sap.m library, but within the sap.ui.layout library. It "is a layout which positions its child controls in a 12 column flow layout. Its children can be specified to take on a variable amount of columns depending on available screen size".
Using the Grid control, and specifying layout data within the layoutData aggregation of each control you place within the Grid, you can define once, for multiple screen size scenarios, a flexible flow based layout that will respond as the screen size alters.
In the Explored app, the Grid - Tile-based Layout sample shows how the Grid works. A number of tiles (in the form of Object List Item controls) are defined within the Grid. Each of them has layout data specified either via the Grid's defaultSpan property or specifically with the span property of an aggregated GridData control. This layout data specifies how many of the 12 columns a control should span, in large (L), medium (M) and small (S) screen circumstances.
In the sample referenced above, the smaller tiles have layout data of "L4 M6 S6", meaning that on a large screen each will span 4 columns (meaning there will be three on any given row), otherwise they'll span 6 columns (meaning there will be two on any given row). The larger tiles (the "Deskjet Super Highspeed" and the "Power Projector 4713" in the screenshots below) have layout data of "L6 M12 S12", meaning that on a large screen each will span 6 columns (two on any given row) and on medium and small screens each will span 12 columns (one on any given row). You can see the effect here:
For more information on the 12 column grid, see Johannes Osterhoff's post Responsive Web Design.
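The span arithmetic can be made concrete with a few lines of standalone JavaScript (an illustration only, not UI5 API; the spanInfo helper is made up):

```javascript
// Interpret a Grid span string such as "L4 M6 S6": for a given screen size
// ("L", "M" or "S"), work out how many of the 12 columns a control spans,
// and therefore how many such controls fit on one row.
function spanInfo(spanString, size) {
  const match = spanString.match(new RegExp(size + "(\\d+)"));
  const columns = match ? parseInt(match[1], 10) : 12; // fall back to full width
  return { columns: columns, perRow: Math.floor(12 / columns) };
}

console.log(spanInfo("L4 M6 S6", "L"));   // { columns: 4, perRow: 3 }
console.log(spanInfo("L6 M12 S12", "M")); // { columns: 12, perRow: 1 }
```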
In many cases, simply turning off the display of a control, or part of a control, is all that's needed to improve the way an app is displayed on different sized devices. Controls inherit from sap.ui.core.Control which has a visible property. With this, you can turn off the display of pretty much any control programmatically.
One technique is to do this imperatively, in a controller function, depending on circumstances. Alternatively, and this is often used in conjunction with properties in the Device Model, you can do it declaratively, in the view definitions. An example of this is for the navButton of a Page control that is aggregated (via a View) into the detailPages of the Explored app's Split App.
The Page in question is the "Not Found" page, shown when (usually via direct URL manipulation) the specified control cannot be found:
The navButton is the left-arrow shown here. When running on a desktop or tablet, there's no need to display a navButton to navigate back to the master, as the master is still on show (the Split App is displaying it in the left hand third of the screen). But on a smartphone, with a single Nav Container, only either a detail page or a master page is shown, meaning that some sort of button is required to navigate back to the master if a detail page is shown.
This is achieved by binding the Page control's showNavButton property, which takes a boolean, to the (boolean) isPhone property of our device model. Because that value is derived from the Device API, the navButton will be shown on a smartphone but not on a desktop or tablet. You can see this in the notFound.view.xml definition.
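In outline, and assuming the device model is registered under the name "device" (the onNavBack handler name and the text are made up for illustration), the binding looks something like this:

```xml
<!-- Sketch of the notFound view's approach (sap.m namespace assumed):
     the back button appears only when the device model says we're on a phone -->
<Page title="Not Found"
      showNavButton="{device>/isPhone}"
      navButtonPress="onNavBack">
    <Text text="Sorry, but the requested resource was not found"/>
</Page>
```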
When designing responsive Table controls, each Column control can be declared with a minimum screen width that is required for the values in that column to be displayed. If the minimum width is not met, the column is not displayed.
In the Explored app, the Description column in the Samples Icon Tab Filter is subject to a minScreenWidth value of "Tablet" (a pre-defined device size), below which it is not displayed. Here we can see the display with and without the column:
Instead of the display of a column being simply suppressed, it can be "popped in". This means that it will be displayed underneath the rest of the columns, still within the logical row / record display. The popin behaviour will kick in if the minimum screen width for a Column is not met.
In the Explored app, a control's properties are displayed in a "Properties" Icon Tab Filter. Two Column controls in the Table have values specified for the demandPopin property - those for Description and Since. If the restricted width requires, the Description column will be popped in (demandPopin="true"), and the Since column will disappear (demandPopin="false"):
With the related Column property popinDisplay you can control how the popped in column should appear.
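A sketch of the relevant column declarations (sap.m XML view namespace assumed; the header texts are examples, not the Explored app's exact markup):

```xml
<!-- Below Tablet width, Description pops in underneath the row,
     while Since simply disappears -->
<Table>
    <columns>
        <Column>
            <Text text="Name"/>
        </Column>
        <Column minScreenWidth="Tablet" demandPopin="true">
            <Text text="Description"/>
        </Column>
        <Column minScreenWidth="Tablet" demandPopin="false">
            <Text text="Since"/>
        </Column>
    </columns>
</Table>
```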
Sometimes a declarative approach to handling responsive design isn't enough, and you need to fork the design depending on what device the app is running on. An example of where you might want to do this is in the display of items in a list. On a device with enough screen real estate you might want to use an Object List Item as a template for each item. Used on a smartphone, this control will mean that you may only get a small number of items displayed before you have to scroll. More appropriate might be a control that takes up less vertical space, such as a Standard List Item.
Instead of a template, you can specify a factory function in the aggregation binding of a control. You can then implement this factory function to use the Device API to determine the current device type, and dynamically return the control appropriate for that device.
While the Explored app doesn't use factory functions, there is an example in the upcoming OpenUI5 Course that shows how they're used. In Episode 6 "Custom Sorting, Factory Functions and XML Fragments", a factory function is used to return a different item template depending on the category of the item in the data. The reference to this factory function is made declaratively in the embedded binding of the List control's items aggregation.
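The pattern itself can be sketched independently of UI5 - here plain objects stand in for the controls a real factory would instantiate, and the names are illustrative:

```javascript
// Sketch of the factory-function pattern: instead of one fixed template,
// a function decides per item what to create. In a real app the isPhone
// flag would come from the Device API.
function createListItem(item, isPhone) {
  return isPhone
    ? { control: "StandardListItem", title: item.name }                    // compact
    : { control: "ObjectListItem", title: item.name, number: item.price }; // richer
}

const items = [{ name: "Deskjet", price: 119 }, { name: "Projector", price: 856 }];
console.log(items.map(i => createListItem(i, true).control));
// [ 'StandardListItem', 'StandardListItem' ]
```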
When specifying screen sizes for the minScreenWidth property, for example, you can use various standard CSS size references. There is also, however, a Screen Size enumeration which abstracts some standard sizes away for you. It is one of these enumerations - Tablet - that is used throughout the Explored app's entity view declaration.
As you can see, the responsive features and facilities offered by the UI5 toolkit are many and varied, but at the end of the day it is up to you as a designer and / or developer to wield these features in the most appropriate way possible, to implement your responsive Fiori apps.
Share and enjoy!
I'm continuing my journey spreading the word about Fiori and UI5. Last November I was in Sydney at the SAP Architect & Developer Summit, giving a locknote and a workshop, and speaking at an executive lunch. A couple of weeks ago it was a short trip to Brussels to speak about OpenUI5 at FOSDEM, and now at Mastering SAP I have three slots. Here's the description of each of them.
Keynote: Can I Build a Fiori App? Yes You Can!
Fiori is not just the new UX-focused, role-based application paradigm from SAP, it's also a set of technical constraints coupled with a rich but finite set of design patterns for UI. Most importantly it's made possible by certain parts of the SAPUI5 toolkit that were specifically built with Fiori in mind. (In fact, the first customers of the sap.m library in SAPUI5 were the SAP Fiori developers themselves). This session tells you what you need to know to build a Fiori app.
Tips & Tricks from the Trenches of a Fiori/UI5 Developer
Developing Fiori and UI5 apps with the UI5 toolkit is different from what you're used to. Different generally because it's HTML5 based, and different specifically because it's UI5. Learn the tips and tricks that I use on a daily basis, and get to know how to drive, modify and extend Fiori/UI5 apps from the command line console of Chrome's developer tools. Master UI5 debugging and maintenance from within the browser and get a step ahead.
Workshop: Building an SAP Fiori-like App From (Almost) Scratch - Hands On!
Starting from a skeleton app that has a structure but minimal content, and an OData or JSON data source, we build together a working Fiori app with SAPUI5. We cover bootstrapping the SAPUI5 toolkit, the Component-based approach to development, Model-View-Controller based development, XML views, navigation, data binding, model operations and more. This is similar to the Open Source Convention (OSCON) hands-on session I co-presented in Portland, June 2014, and the CD168 hands-on sessions I co-built & co-presented at SAP TechEd in 2013 (which were sold out / overbooked many times).
Perhaps I'll see some of you there. In any case, share & enjoy!
2015
Fiori Apps Reference Data into a Spreadsheet 09 Jan 2015
Pulling the Apps info from the OData service used by the SAP Fiori Apps Library app into a Google spreadsheet. More info here: Fiori App Data into a Spreadsheet? Challenge Accepted!
2014
YAML Model for UI5 20 Dec 2014
I scratched an itch and built a simple YAML Model implementation for UI5. More info here: https://github.com/qmacro/YAMLModel
Creation & Reload of UI5 UIs in the Chrome Developer Console 24 Nov 2014
Following my workshop session at the SAP Architect & Developer Summit, this screencast shows the creation of a quick UI, using the manual Chrome Developer Console techniques we learned, and the subsequent export and reload as XML. (I recorded this at Sydney airport on the way back from the summit).
SAP Fiori Rapid Prototyping: SAP Web IDE and Google Docs 05 Nov 2014
With the power of the SAP Web IDE and its plugin / template architecture, we can create custom templates that allow you to create Fiori apps based on all sorts of data sources, in this case, a Google Spreadsheet.
SAP Web IDE Local Install - Up and Running (3-video playlist) 27 Oct 2014
SAP made available its Web IDE as a locally installable service in Oct 2014. This short series of videos shows you how to get up and running with it.
SAP Fiori & UI5 Chat, Fri 17 Oct 2014 17 Oct 2014
Brenton O'Callaghan and I have a 30 min chat about SAP Fiori and the new, unofficial SAP Fiori App that gives information about the available SAP Fiori Apps.
UI5 Icon Finder 14 Sep 2014
A very quick screencast of an "Icon Finder" app that remembers the word associations you make, so you can more easily find the icons next time. See Scratching an itch - UI5 Icon Finder for more info.
OpenUI5 MultiComboBox First Look 25 Jul 2014
A first look at the sap.m.MultiComboBox in OpenUI5 version 1.22. Note that the addition of a key for the root element is not entirely necessary (but probably what you might want). I wrote more about this here: Keyed vs Non-Keyed Root JSON Elements & UI5 Binding.
The SAP Fiori App Analysis application 30 Jun 2014
A short overview of the SAP Fiori App Analysis app, written itself as a Fiori style app. In this overview I show the source for the information (the SAP help documentation), mention how I convert the gathered spreadsheet data into a more easily consumable form, and explore the app a little bit too.
DSON (Doge Serialized Object Notation) Model mechanism for UI5. So model, wow! 06 Jun 2014
A bit of fun for a Friday late afternoon, and it helped me to play about with extending existing client models in the sap.ui.model library set.
Manipulating UI5 Controls from the Chrome Dev Console 17 May 2014
Just a quick screencast to show how easy it can be to find, grab, manipulate and create controls in a UI5 application from the Chrome Developer Console.
Simple Workflow App with UI5 13 Apr 2014
This is a quick screencast of the app I wrote for the basis of my chapter in the SAP Press book Practical Workflow 3rd Edition.
Coding UI5 in JSBin 11 Apr 2014
A quick recap of what we did during the UI5 Mentor Monday in March 2014, showing how easy it is to construct good looking UIs with UI5, and also the great facilities of JSBin - creating, viewing and sharing HTML, CSS and JavaScript snippets with live rendering.
Mocking Up the Payroll Control Center Fiori App 14 Feb 2014
There was a blog post on the Payroll Control Center Fiori app and I decided to mock the UI up directly. This video shows me doing that.
#UI5 Control on the Screen, Quick 12 Feb 2014
There was a conversation about how fast you could get a UI5 control on the screen. I decided to try to see how fast it could be.
Using Gists on Github to share code 09 Jan 2014
This screencast relates to a document on SCN "Help Us To Help You - Share Your Code" that describes how and why you should share your code that you want help with, using Gists on Github.
2013
SublimeUI5 - Snippets & Templates for SAPUI5/OpenUI5 19 Dec 2013
SublimeUI5 is a package for Sublime Text 2 for developing SAPUI5 / OpenUI5 applications. There are two parts to it - a series of snippets and some basic application templates to help you quickly get started with complete MVC-based running apps. (I don't maintain this package any more but you're welcome to take it over!).
SAPUI5/Fiori - Exploration of an App 22 Nov 2013
An exploration of a custom Fiori app and the SAPUI5 controls that are used to build it. For the full context of this, see the 2-part SAP CodeTalk - SAPUI5 and Fiori playlist.
SAP Fiori-style UI with SAPUI5 06 Nov 2013
This was a short video to show the sort of thing that attendees of our SAP TechEd 2013 Session CD168 "Building SAP Fiori-like UIs with SAPUI5" would be building.
2012
Re-presenting my site with SAPUI5 13 Dec 2012
A short demo of an experiment I carried out to explore SAPUI5 and learn more about it. I really like the Shell component as a user interface paradigm and decided to see if I could re-present the content of my home page and weblog using SAPUI5 and specifically the Shell. It seemed to work out OK.
Stupid Firebase and SAPUI5 tricks 14 Apr 2012
Saturday evening hacking around with Firebase, the command line, and an SAPUI5 Data Table. Fairly pointless but interesting nonetheless. Firebase uses websockets for realtime data streaming to your browser-based app, and you can interact with the JSON data with HTTP. Every piece of data has a URL. Now that's nice.
SAPUI5 and OData 12 Feb 2012
I looked at the data binding support for OData based data sources (such as those exposed by SAP NetWeaver Gateway!) in the (then relatively) new SAPUI5 toolkit. I also wrote this up on SCN: "SAPUI5 says 'Hello OData' to NetWeaver Gateway".
I'm not 100%, so earlier this morning I needed a bit more time than usual to get my brain in gear. So with a coffee I decided to spend a few mins hacking the list display. It's not a permanent solution of course, but at least demonstrates that there are properties in the Apps entity that can be used to distinguish the "duplicate" entries. Should be a quick fix (yes, pun intended!) for the Fiori Implementation eXperience folks to carry out.
Here's a quick screencast as an animated GIF (why not?)
Right, on with the morning.
I have attended FOSDEM a number of times over the years; in the early days in my capacity as a Jabber / XMPP programmer, and these days more generally, but this time it was specifically about OpenUI5.
SAP's open sourced UI development toolkit for HTML5 is SAPUI5's twin with an Apache 2.0 licence. SAPUI5 is the basis for SAP's UI innovation and what SAP Fiori apps are built with. Although still relatively young, it's a very accomplished toolkit, and one I was eager to share with the open source developer community at large.
I spoke on the subject of OpenUI5 in this talk:
A Whirlwind Introduction to OpenUI5
and it was very well received; the room was packed and there was some great feedback. It wasn't difficult talking about a product from such a great team, although to add extra spice, I'd decided not to use any slides, and instead do some live coding on stage.
I had deliberately set myself up for a fall, showing how difficult it can be to build hierarchies of controls in JavaScript, which then set the scene for my favoured approach of defining views declaratively, with XML being my favourite flavour in that vein. Luckily the audience seemed to appreciate the in-joke, and not everyone thought I was an idiot :-)
I've made the source code of the app I built on stage available on Github, in the fosdem-2015-openui5 repository.
The SAP folks had an OpenUI5 booth at FOSDEM too which was staffed by a few of the real UI5 developers, so the conference attendees were able to learn first hand about the toolkit from the source. The booth saw a nice increase in traffic after my talk, which is a great sign.
SAP has had a presence at many conferences in the past few years, but this one resonates particularly with me, as some folks might think that the SAP and open source worlds are far apart. How wrong they are. Onwards!
Picture credits: Denise Nepraunig, Jan Penninkhof, Martin Gillet - thanks!
Between Stafford and Wolverhampton I hacked around with the source, particularly the html.cson file that contained a number of UI5 snippets for HTML files. From where was this error message emanating, and why now?
Well, it seems that a few days ago, in release 0.171.0, Atom had moved from parsing CoffeeScript Object Notation (CSON) with cson to parsing with cson-safe. CSON is the format in which snippets can be written. Moving to cson-safe meant that the parser was rather stricter, and this was the source of the "Syntax error on line 4, column 11: Unexpected token" error.
By the time we'd got to Birmingham, I'd figured out what it was: tabs. In wanting to move in the direction of the UI5 coding standards, I'd started moving to tabs for indentation within the UI5 snippets, as you can see in this openui5 starter snippet. While the original cson parser used by Atom was fine with real tabs in the snippet source file, cson-safe didn't like them.
Switching the tabs to literal "\t" tab representations (i.e. backslash then "t") solved the issue.
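For illustration only - the selector, snippet name and body here are made up, but the shape of the fix is real: the body string contains the two characters backslash and t, rather than a raw tab character:

```cson
# Hypothetical snippet in the style of html.cson: cson-safe rejects a raw
# tab inside the body string, so the indentation is written as a literal \t
'.text.html':
  'OpenUI5 bootstrap script tag':
    'prefix': 'openui5'
    'body': '<script\n\tid="sap-ui-bootstrap"\n\tsrc="resources/sap-ui-core.js"\n></script>'
```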
Today there was a comment in the announcement post, asking whether there was "any way this information can be supported via a downloaded (into Excel perhaps)? It would make sorting and filtering much easier".
Seeing as one of the technical guidelines for Fiori apps is the use of an OData service to supply the domain data, and I had a bit of time over lunch, the well known phrase "Challenge Accepted!" floated into my consciousness.
With the power of OData, JSON, Google Apps Script and the generally wonderful cloud productivity platform that is Google Apps, I set to work, and within a short amount of time, the challenge was completed.
Here's a video with all the details. The spreadsheet is here.
Share and enjoy!
I've been using a Garmin Forerunner 110 watch which has been very good, on the whole, although the USB cable and connectivity left something to be desired.
I bought my wife Michelle a TomTom Runner Cardio for her birthday back in August, and have been intrigued by it ever since. And she bought me one for Christmas, so I'm trying that out for 2015. I went out on my first run of this year with it just this morning, in fact.
But back to 2014. I completed 101 runs (1,281.22km) and they're all logged in Endomondo. I don't have the premium subscription, just the basic, but the features are pretty good. There's an option to upload from the Garmin watch, via a browser plugin which (on this OSX machine) has become pretty flaky recently and now only works in Safari, but once uploaded, the stats for each run are shown rather nicely:
Endomondo also offers simple statistics and charts, and a tabular overview of the runs, that looks like this:
One thing that bothered me, at least with the free service, is that there was no option to download this data. So I paged through the tabular data, and copy/pasted the information into a Google Sheet, my favourite gathering-and-stepping-off point for a lot of my data munging.
If nothing else, as long as the data is largely two dimensional, I've found it's a good way to visually inspect the data at 10,000 feet. It also affords the opportunity for some charting action, so I had a look at my pace over the year, to see how it had improved. This is the result:
The three peaks in Feb, Jun and Sep are a couple of initial runs I did with Michelle plus her first 8km in London (now she's in double km figures and has a decent pace; I'm very proud of her).
I could have gone further with the analysis in the spreadsheet itself, but I'm also just starting to try and teach myself Clojure, and thought this would be a nice little opportunity for a bit of data retrieval and analysis.
Of course, the first thing to do was to make the data in the Google Sheet available, which I did with my trusty SheetAsJSON mechanism. It returned a nice JSON structure that contained all the data that I needed.
So now I had something that I could get Clojure to retrieve. Here follows some of what I did.
I'm using Leiningen, which is amazing in a couple of combined ways: it Just Works(tm), and it uses Maven. My only previous experience of Maven had me concluding that Maven was an absolute nightmare, but Leiningen has completely changed my mind. Although I don't actually have to think about Maven at all, Leiningen does it all for me, and my hair is not on fire (for those of you wondering, Leiningen's tagline is "for automating Clojure projects without setting your hair on fire", which I like).
So I used Leiningen to create a new application project:
lein new app running-stats
and used my joint-favourite editor (Vim, obviously, along with Atom), with some super Clojure-related plugins such as vim-fireplace, to edit the core.clj file. (More on my Vim plugins another time.)
Here's a short excerpt from what I wrote:
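(Reconstructed here from the line references that follow - line 2 is the data.json require, lines 6-9 define stats, lines 11-13 Year-2014, lines 15-17 HR-values, and lines 19-21 the average; the original may have differed slightly in detail.)

```clojure
(ns running-stats.core
  (:require [clojure.data.json :as json]
            [clj-http.client :as client]
            [clojure.walk :refer [keywordize-keys]]))

(def stats
  (json/read-str
    (:body
      (client/get "http://bit.ly/qmacro-running-2014"))))

(def Year-2014
  (keywordize-keys
    (stats "Year-2014")))

(def HR-values
  (filter number?
    (map :Avg_HR Year-2014)))

(def avg-HR
  (float
    (/ (reduce + HR-values) (count HR-values))))
```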
Let's look at this code step by step.
I'm using Clojure's data.json library (line 2) to be able to parse the JSON that my SheetAsJSON mechanism is exposing. I'm also using the clj-http HTTP client library (line 3) to make the GET request. Finally I'm using the clojure.walk library (line 4) for a really useful function later on.
I decided to churn through step by step, which is why you're seeing this code in four chunks, each time using the def special form to create a var in the current namespace.
There's stats (line 6), which has the value of the parsed JSON from the body of the response to the HTTP GET request. To unravel lines 6-9 we have to read from the inside outwards.
First, there's the call to client/get in line 9 (the clj-http library is aliased as client in line 3). This makes the HTTP GET request and the result is a Persistent Array Map that looks something like this:
running-stats.core=> (client/get "http://bit.ly/qmacro-running-2014")
{:cookies {"NID" {:discard false, :domain ".googleusercontent.com", :expires #inst "2015-07-05T12:23:49.000-00:00", :path "/", :secure false, :value "67=EUPTfvAv3U5Vofm1F3Fb_D9OjmwYS1yC3Ju-uvgostmqzKSNLHKHiHGMc-cwFBAES0R3qcLFQW7W75x6sZjSzein3H7Trxeg6Bk0wOJ0q-AaYXA0RxYw0-uEhR5ogaXg", :version 0}}, :orig-content-encoding nil, :trace-redirects ["http://bit.ly/qmacro-running-2014" "https://script.googleusercontent.com/macros/echo?user_content_key=jmP [...] 5, :status 200, :headers {"Server" "GSE", "Content-Type" "application/json; charset=utf-8", "Access-Control-Allow-Origin" "*", "X-Content-Type-Options" "nosniff", "X-Frame-Options" "SAMEORIGIN", "Connection" "close", "Pragma" "no-cache", "Alternate-Protocol" "443:quic,p=0.02", "Expires" "Fri, 01 Jan 1990 00:00:00 GMT", "P3P" "CP=\"This is not a P3P policy! See http://www.google.com/support/accounts/bin/answer.py?hl=en&answer=151657 for more info.\"", "Date" "Sat, 03 Jan 2015 12:25:02 GMT", "X-XSS-Protection" "1; mode=block", "Cache-Control" "no-cache, no-store, max-age=0, must-revalidate"}, :body "{\"Year-2014\":[{\"Date\":\"2014-01-02T00:00:00.000Z\",\"Description\":\"First run of 2014\",\"Distance\":\"13.50 km\",\"Time\":\"1h:13m:51s\",\"Avg_Speed\":\"11.0 km/h\",\"Avg_Pace\":\"5:28 min/km\",\"Avg_HR\":168,\"Distance_Value\":13.5,\"Pace_Value\":\"1899-12-30T05:28:00.000Z\",\"Pace_Val_Mins\":5,\"Pace_Val_Secs\":28,\"Pace_In_Secs\":328,\"Month_of_Run\":1},{\"Date\":\"2014-01-05T00:00:00.000Z\",\"Description\":\"Wet and windy Copster Hill.\",\"Distance\":\"14.05 km\",\"Time\":\"1h:16m:31s\",\"Avg_Speed\":\"11.0 km/h\",\"Avg_Pace\":\"5:27 min/km\",\"Avg_HR\":169,\"Distance_Value\":14.05,\"Pace_Value\":\"1899-12-30T05:27:00.000Z\",\"Pace_Val_Mins\":5,\"Pace_Val_Secs\":27,\"Pace_In_Secs\":327,\"Month_of_Run\":1},{\"Date\":\"2014-01-08T00:00:00.000Z\",\"Description\":\"Brookdal [...]
Quite a bit of a result. Looking at the keys of the map, we see the following, which should be somewhat familiar to anyone who has made HTTP calls:
running-stats.core=> (keys (client/get "http://bit.ly/qmacro-running-2014")) (:cookies :orig-content-encoding :trace-redirects :request-time :status :headers :body)
There we can see the :body keyword, which we use on line 8 as an accessor in this collection. With this, we get the raw body, a string, representing the JSON:
running-stats.core=> (:body (client/get "http://bit.ly/qmacro-running-2014"))
"{\"Year-2014\":[{\"Date\":\"2014-01-02T00:00:00.000Z\",\"Description\":\"First run of 2014\",\"Distance\":\"13.50 km\",\"Time\":\"1h:13m:51s\",\"Avg_Speed\":\"11.0 km/h\",\"Avg_Pace\":\"5:28 min/km\",\"Avg_HR\":168,\"Distance_Value\":13.5,\"Pace_Value\":\"1899-12-30T05:28:00.000Z\",\"Pace_Val_Mins\":5,\"Pace_Val_Secs\":28,\"Pace_In_Secs\":328,\"Month_of_Run\":1},{\"Dat [...]
Now we need to parse this JSON with the data.json library, which we do in line 7. This gives us something like this:
running-stats.core=> (json/read-str (:body (client/get "http://bit.ly/qmacro-running-2014")))
{"Year-2014" [{"Pace_Val_Secs" 28, "Distance_Value" 13.5, "Date" "2014-01-02T00:00:00.000Z", "Month_of_Run" 1, "Description" "First run of 2014", "Distance" "13.50 km", "Avg_Speed" "11.0 km/h", "Pace_Val_Mins" 5, "Pace_Value" "1899-12-30T05:28:00.000Z", "Avg_Pace" "5:28 min/km", "Time" "1h:13m:51s", "Pace_In_Secs" 328, "Avg_HR" 168} {"Pace_Val_Secs" 27, "Distance_Value" 14.05, "Date" "2014-01-05T00:00:00.000Z", "Month_of_Run" 1, "Description" "Wet and windy Cop [...]
which is eminently more usable as it's another map.
Although it's a map, the keys are strings, which aren't ideal if we want to take advantage of some Clojure idioms. I may be wrong here, but I found that converting the keys into keywords made things simpler and felt more natural, as you'll see shortly.
Lines 11-13 are where we create the Year-2014 var, representing the data set in the main spreadsheet tab.
Looking up the "Year-2014" key in the stats (line 13) gave me a vector, signified by the opening square bracket:
running-stats.core=> (stats "Year-2014")
[{"Pace_Val_Secs" 28, "Distance_Value" 13.5, "Date" "2014-01-02T00:00:00.000Z", "Month_of_Run" 1, "Description" "First run of 2014", "Distance" "13.50 km", "Avg_Speed" "11.0 km/h", "Pace_Val_Mins" 5, "Pace_Value" "1899-12-30T05:28:00.000Z", "Avg_Pace" "5:28 min/km", "Time" "1h:13m:51s", "Pace_In_Secs" 328, "Avg_HR" 168} {"Pace_Val_Secs" 27, "Distance_Value" 14.05, "Date" "2014-01-05T00:00:00.000Z", "Month_of_Run" 1, "Description" "Wet and windy Copster Hill.",
The vector contained maps, one for each run. Each map had strings as keys, so in line 12 I used the keywordize-keys function, from clojure.walk, to transform the strings to keywords. Here's an example, calling the function on the map representing the first run in the vector:
running-stats.core=> (keywordize-keys (first (stats "Year-2014")))
{:Pace_Value "1899-12-30T05:28:00.000Z", :Month_of_Run 1, :Distance_Value 13.5, :Distance "13.50 km", :Avg_HR 168, :Avg_Pace "5:28 min/km", :Pace_Val_Mins 5, :Pace_Val_Secs 28, :Date "2014-01-02T00:00:00.000Z", :Description "First run of 2014", :Time "1h:13m:51s", :Avg_Speed "11.0 km/h", :Pace_In_Secs 328}
I assigned the resulting value of this call to a new var Year-2014.
The Garmin Forerunner 110 measures heart rate (HR) via a chest strap, and an average-HR detail is available for each run:
running-stats.core=> (:Avg_HR (first Year-2014))
168
There were a few runs where I didn't wear the chest strap, so the value for this detail on those runs was a dash, rather than a number, in the running statistics on the Endomondo website, which found its way into the spreadsheet and the JSON.
running-stats.core=> (count (filter (comp not number?) (map :Avg_HR Year-2014)))
6
Yes, six runs altogether without an average HR value. So to get the real average HR values, I just needed the ones that were numbers. I did this on lines 15-17.
By the way, composing with the comp function sort of makes me go "wow", because I figure this is revealing a bit of the simplicity, depth and philosophy that lies beneath the scratch mark I've just made in the surface of functional programming in general and Clojure in particular.
I took the average of the HR values in line 21. This actually returned a Ratio type:
running-stats.core=> (/ (reduce + HR-values) (count HR-values))
15292/95
running-stats.core=> (type (/ (reduce + HR-values) (count HR-values)))
clojure.lang.Ratio
This was interesting in itself, but I wanted a value that told me something, so I called the float function in line 20:
running-stats.core=> (float (/ (reduce + HR-values) (count HR-values)))
160.96841
(Yes, I know taking the average of averages is not a great thing to do, but at this stage I'm more interested in my Clojure learning than my running HR in 2014).
I did continue with my analysis in Clojure, but this post is already long enough, so I'll leave it there for now. If you got this far, thanks for reading! I hope to teach myself more Clojure; there are some great resources online, and the community is second to none.
If you're thinking of taking the plunge, I'd recommend it! I'll leave you with a quote from David Nolen at the end of his talk "Jelly Stains. Thoughts on JavaScript, Lisp and Play" at JSConf 2012:
"[Dan Friedman and William Byrd] got me realising there's a lot more left to play with in Computer Science".
As I embark upon my journey in this direction, I realise that's a very true statement. It's like learning programming all over again, in a good way!
"Codenamed Spartan, the new app will look much more like competitors Chrome and Firefox"
which completely misses the major point: it's not how IE looks, it's how it behaves.
Anyway, the sentence that most caught my eye was this one:
In the past the company has considered changing the name to separate the current browser from "negative perceptions that no longer reflect reality".
This very much reminds me of a passage from Douglas Adams's Hitch Hiker's Guide To The Galaxy, specifically from Episode 11:
The problem of the five hundred and seventy-eight thousand million Lintilla clones is very simple to explain, rather harder to solve. Cloning machines have, of course, been around for a long time and have proved very useful in reproducing particularly talented or attractive – in response to pressure from the Sirius Cybernetics marketing lobby – particularly gullible people, and this was all very fine and splendid and only occasionally terribly confusing. And then one particular cloning machine got badly out of sync with itself. Asked to produce six copies of a wonderfully talented and attractive girl called "Lintilla" for a Bratis-Vogen escort agency, whilst another machine was busy creating five hundred lonely business executives in order to keep the laws of supply and demand operating profitably, the machine went to work. Unfortunately, it malfunctioned in such a way that it got halfway through creating each new Lintilla before the previous one was actually completed. Which meant, quite simply, that it was impossible ever to turn it off – without committing murder. This problem taxed the minds, first of the cloning engineers, then of the priests, then of the letters page of "The Sidereal Record Straightener", and finally of the lawyers, who experimented vainly with ways of redefining murder, re-evaluating it, and in the end, even respelling it, in the hope that no one would notice.
Wonderful.
The summit was the first of its kind to be organised by SAP, and judging from the feedback from the attendees there and then, combined with my own experience, it was a huge success. It was held over a two-day period in the centre of a hotbed of SAP architect and developer talent, with folks converging on the wonderful Australian Technology Park in Sydney from all over the region, plus various additions from the UK, USA and elsewhere.
The Australian Technology Park was almost the perfect setting, being based on a centre of technology (heavy transport and industry) from the last millennium, a centre that proudly displayed historical, and some still-working, physical artifacts, reminding me a lot of the Museum of Science & Industry back home in Manchester.
Falling directly after SAP TechEd Berlin, firmly within the SAP tech conference season, the summit attracted over 300 attendees. There were a number of reasons for this being a great event to attend: the style was a sweet spot between different-sized events, it was priced well, and the content was just right.
There's the daddy of all SAP conferences, SAP TechEd && d-code, and then at the other end of the scale there are grass-roots community-organised events such as CodeJams and InsideTracks. While the former extends, with the InnoJam pre-event, to more or less a week, and the latter are often single-day affairs, this summit hit the sweet spot in between, finding a great balance between time and content.
This was one of those unusual events where the travel and accommodation costs, even for those relatively local, were more than the event itself. (Because I was speaking, SAP covered my costs – thank you!) This is significant; a price point of AUD 695.00 (around GBP 385.00) combined with the agenda means that it was hard to resist.
The two most important ingredients of course for any event are the people and the content, often going together. Here are just a few of the hands-on workshop items from the agenda:
With those sessions typical of the quality and content, given by those people, you know it's going to turn out well.
Just as significant as the agenda were the conversations to be had with the amazing folk that were there too. Trying to name them all would be an exercise in futility; suffice it to say that the large majority of what I'm going to call the "ANZ SAP Mob" (in reference to the "Dutch SAP Mafia") were there, which for me was reason enough to attend. To be able to learn from conversations with these people was priceless.
I was lucky enough to be able to contribute in three ways to this summit.
I gave a keynote at the end of Day 1 (called a "locknote" – who knew?) entitled Fiori and UI5 Software Logistics, or: Are We in the Future Yet?.
My aim was to convey the idea that in the SAP development world, we've been heretofore shielded from, and largely unaware of, one of the most important parts of software development: the artifacts.
You could perhaps think of artifacts as the tangible results of our mental machinations, a developer currency that we grow, discuss, exchange and share. And with the advent of Fiori and UI5 development, we should think explicitly about how we should nurture these artifacts to be the best we can make them, and in doing that, embrace tools available outside the traditional SAP developer ecosphere. Tools such as linters, editors, workflow mechanisms and source code control systems. In particular, I focused on git and Github Workflow.
On Day 2 I held a 2 hour hands-on workshop entitled Learn to Drive Fiori Applications from Underneath and Level Up!.
In this workshop I took the attendees (the workshop was fully booked!) through a Fiori application, from underneath, discovering it, controlling it, driving it and modifying it via the Chrome Developer Tools. There was a lot of content to get through but we managed it, not least due to the fact that everyone got on board with the approach and really did a great job in collaborating and keeping up with me. Thanks folks!
When I was preparing the workshop booklet, it took on a life of its own, so much so that it turned into a standalone 48-page mini-publication, so that anyone who had it could follow through everything I wanted to teach, even after the workshop. And I'm making that workshop booklet available to everyone so that they can all benefit:
Workshop Booklet: Learn to Drive Fiori Applications from Underneath and Level Up!
If you would like to leave any feedback please go ahead and do that at the end of this post.
On Day 1 there was an Executive Lunch event with folks from all around the Australia and New Zealand region. I spoke on the new development paradigm that Fiori and UI5 have ushered into the SAP developer world, and gave an impromptu demonstration of UI5 development, building a simple app as they ate their lunch :-)
I hear that SAP ANZ are planning to run this event again next year, which is great. The aim is to attract more attendees, which is also great. But there's also a balance to be maintained; the synergy, the timing of two full days, the ability to talk to everyone, was, in my opinion, just right.
I'm really hoping that this event has a future, stays roughly true to this inaugural incarnation, and spreads to other areas around the globe. If one came to Europe, I'd sign up immediately and encourage my fellow SAP hackers to do the same.
Well done to all the team for organising this, and thanks to all the superheroes that attended and shared knowledge and experience. We are architects and developers. Learning from one another and sharing with one another is what we do.
One of the strands of the conversation with Matt, Nigel James & others was regarding the potentially transient nature of the definition of views, or other smaller UI elements, created while working within the console. In the console you can quite easily build views in JavaScript (as the console is a JavaScript console!). Building machine-readable, declarative views such as those based on XML or JSON is a little bit more cumbersome.
However, with a great feature of the UI5 Support Tool – Export to XML – we can indeed have our UI declared for us in XML, which is rather useful! Not only that, but we can then iterate by loading that generated XML back into Chrome.
While at SYD airport just now, waiting for my flight back home, I recorded a quick screencast to illustrate this. It shows the creation of a quick UI, using the manual console techniques we learned in my workshop. Then the UI is exported as XML, which the Support Tool duly does for us, inside an XML View container. That exported XML View is then reloaded, and we can see of course that it is faithful to what was originally created.
Share & enjoy!
If you're emailing a group of people and addressing them directly in the body of the email, you should think about making sure they're in the TO list, not just the CC list. It's common courtesy, and also you'll probably get a reply quicker :-)
So this week of course was the beginning of what I call TechEd season, or now I suppose we have to call it d-code season. Last week saw SAP TechEd and d-code take place in the wonderful Venetian resort in Las Vegas. Now that I am starting to get over my jet lag, DJ asked me to jot down a few of the highlights for this edition of "This Week in Fiori".
SAP Executive Keynote from SAP TechEd & D-code by Steve Lucas
The first mention has to go, of course, to the keynote with Steve Lucas. With a relaxed and developer-focused atmosphere it was an incredibly enjoyable 75 minutes covering some of the amazing things that people are doing with SAP software. The reason it gets a special mention here is because it was (for me anyway) the first time I saw SAP Fiori on a watch. Steve introduced a real-time Fiori application on the HANA Cloud Platform which was integrated with and primarily used on Samsung's latest smart watches. The demo is at about 59+ minutes in, for anybody interested. In fact, the entire keynote is well worth the time as it really brought to life some of SAP's new technologies.
SAP Fiori Launchpad Overview by Aviad Rivlin
Aviad does it again with this excellent overview of the SAP Fiori Launchpad in this voice-over session. He talks us through the Launchpad itself and its capabilities, as well as giving an overview of how the Launchpad could work in a hybrid scenario where some functionality is based on-premise and some in the cloud. Well worth a watch for anybody interested in or working with the Launchpad.
Unified Inbox with SAP Fiori by Ramana Mohanbabu
This was an interesting session that I enjoyed, covering the connection of the SAP Fiori Unified Inbox to multiple systems to give end users access to all of their SBWP items in a much more usable way.
Swell Analytics by Clint Vosloo and Chris Rae
Although they competed directly with me and John Appleby during this year's DemoJam, I am always more than happy to give credit where it is most definitely due, and these guys deserve it! They created an amazing application using OpenUI5 to identify, predict and rate the quality of swells for surfers (yes, I said surfers!). Well worth a watch, and it shows off the awesome stuff you can build with OpenUI5.

These are just four of the very many Fiori and OpenUI5 related sessions from Las Vegas that caught my eye. If I were to mention all of the sessions, you would get bored far quicker than I can type, so I won't even try. But please do check out the rest of the sessions covering all of SAP's new offerings, from the HANA Cloud Platform (HCP) and SAP Mobile Secure right through to the easy way developers can try all this out for themselves at hcp.sap.com.
So all that is left for me to do is thank DJ for allowing me to post on some of my experiences from Las Vegas. TWIF is an excellent series and one I love reading each week! Comments most welcome as always!
Brenton.
Well hello again, I'm back. I couldn't miss the most significant week number, now, could I? :-) And next week I have something special for you: the TWIF episode will be written by a guest author. Really excited about that! If you're interested in becoming a guest writer for this series, get in touch! OK, let's get to it.
SAP Portal and SAP Fiori – Common Architecture by Aviad Rivlin Aviad has been at it again, producing great content and bringing more clarity to this important subject. Although only a short post, it's worth mentioning here, because it helps crystallise SAP's intentions in this space (readers of this TWIF series have seen many mentions of this subject in the past) and also because it points to a whitepaper "SAP Enterprise Portal and SAP Fiori – Common Architecture Recommendations" which is worth a read.
What's New in SAP Fiori Launchpad by SAP For the UI Add-On for NetWeaver, otherwise known as UI2, version 1.0 SPS 10 is now available. This is a layer of software that provides a lot of the Fiori services and infrastructure (yes, there's more to Fiori than just UX, you know ;-) including the UI5 runtime, the personalisation services and the Launchpad. While the individual Fiori apps are of course the main event, without this layer, without the Launchpad, the experience would be lacking something.
This What's New document, in the UI2 section of help.sap.com, gives us a good overview of what the important areas of focus for SAP have been in the recent period. Notably, these areas are Portal integration (the headerless mode) and performance. On performance, there have been various improvements, from moving the storage of personalisation information from an XML document to database tables (who thought using XML documents for storage of large amounts of data was a good idea?) to caching of target mappings in the browser. Nice!
SAP Fiori, Demo Cloud Edition by SAP Well, it was a long time coming, and it's still not ideal, but it's THERE! An online, available, demo version of SAP Fiori, for folks to get a better feel for the Launchpad, for some of the apps, and to experience the UX first hand. Not only will this be great for all of that, but for those implementing their own Fiori apps, it will also serve as a useful and hopefully always-available set of reference designs, alongside the SAP Fiori Design Guidelines I wrote about in TWIF episode 2014-28.
Why not ideal? Well, it only contains a very small number of apps from the 300+ available, and the sample data is a little flat. Here are the apps available:
It's early days for this demo, and I'm hoping to see a lot wider variety of apps available, along with more meaningful sample business data, in the next iteration. But until then, so far so good!
SAP Fiori & UI5 Chat, Fri 17 Oct 2014 by Brenton O'Callaghan and me Earlier this year, Brenton and I ran a webinar "Understanding SAP Fiori Webinar" which was well received. I wrote it up in a post on SCN "The Director's Cut" and also on Bluefin's website "Webinar & More: Understanding SAP Fiori", and in fact we'll be running another SAP Fiori related webinar in December, watch this space!
Last Friday Brenton and I decided to sit down and shoot the breeze again on the subject of Fiori, this time looking at an SAP Fiori app that allows you to explore what Fiori apps are available. We looked at it from above, and from below, and had a great time doing so. It's 30 mins long, so grab a cup of tea and a digestive biscuit and have a look: "SAP Fiori & UI5 Chat, Fri 17 Oct 2014"
Until next time, share & enjoy!
The agenda is packed with sessions I'm really looking forward to attending, and a huge list of amazing folks from the SAP world are there – many of them presenting. Check out the full agenda, available from this page, to see what I mean.
I'm rather honoured to have been invited, and have a couple of speaking slots. I've added my sessions to the Lanyrd page for the conference, so the links below will take you to the slots there:
On day 1, I'm giving the Locknote Address, which is at the end of the day's sessions and just before the cocktails, so I'd better keep it short and to the point! – Fiori & UI5 Software Logistics, or: Are We In The Future Yet?
On day 2, I'm running a 2-hour hands-on workshop "Learn to Drive Fiori Apps from Underneath and Level Up!". This should be a lot of fun, and revolves around mastering the perfect storm of Chrome's Developer Tools and the UI5 toolkit and support mechanisms.
As well as the summit's website itself, you can read more about the event in Thomas Jung's post on SCN: "Coming Down Under – see you at SAP Architect and Developer Summit in Sydney".
I'm really excited to be attending, not least because I'm finally going to meet some of my heroes from the Australia & NZ SAP developer world. Sydney here I come!
Another week, another set of Fiori links. Let's get to it!
Fiori App Reference Library app, via Luis Felipe Lanz Well, it was bound to happen, and I'm celebrating that. Luis tweeted a link to a lovely Fiori app, the Fiori App Reference Library, which contains details on the 300+ Fiori apps so far. Of course, the original meta Fiori app, the SAP Fiori App Analysis application (which I mentioned in TWIF 2014-31) is still going strong – find out more about this in this 5 min video "The SAP Fiori App Analysis application".
But what about this new app from SAP (rather than from me)? Well, there are a couple of parts of the URL (https://boma0d717969.hana.ondemand.com/sap/fix/externalViewer/) that suggest to me that it's possibly temporary, or still in development (there are some active debugger statements in there too), but apart from that, it's a fine example of a classic Fiori app and uses a 1.24 runtime of UI5. I'm tempted to dig in right now and start exploring how it's put together, but I'll leave that for another time. I'll just point out that the data it uses is from a proper OData service, which is in itself more useful than you might think – an official machine-readable detailed list of Fiori apps from SAP. Let a thousand consumer apps bloom!
SAP CodeJam on RDE at Ciber NL organised by Wim Snoep SAP River RDE, or to give it its new name SAP Web IDE (hopefully it won't change again :-) is an important topic to understand in the world of Fiori. It's what many developers (although not all) will be using to manage Fiori apps from a creation and extension point of view. RDE has been a long time in gestation but today's incarnation is very accomplished, and those looking to understand SAP's approach to software management in the Fiori age need to spend some time investigating this.
One of the "Dutch SAP Mafia" members, Wim, organised an SAP CodeJam on RDE which looked to be a great success. The developer ecosystem is not just about the languages (say, JavaScript) and frameworks[^n] (UI5) but also about the tools and environments within which one works. So this CodeJam was ideally suited to learning more about SAP's environment. The day saw developers build Fiori applications in RDE, and I was happy to see that our TechEd hands-on session content CD168 Building SAP Fiori-like UIs with SAPUI5 – which was created for last year's SAP TechEd events but has seen action ever since – was put to good use for this event too.
[^n]: actually one should refer to UI5 as a toolkit rather than a framework, for reasons too long and detailed to go into here :-)
Introduction to SAP Fiori UX by SAP I've written about this course from Open SAP before, most recently in the previous TWIF episode, TWIF 2014-39. Well, I thought I'd give a quick update on my perspective … to say that I've abandoned the course. Fiori is a huge topic, and one can't expect a single course to cover everything. But I did expect some UX content, as it's an incredibly important aspect of the Fiori experience. Unfortunately I didn't find any, and I noted that I wasn't alone in this regard either.
With the combination of this issue and the as yet unresolved issues from Week 2, I decided to give up on the course and devote the time I'd allocated for study to other, more UX/UI related matters, in particular by studying further the SAP Fiori Design Guidelines that I wrote about in TWIF 2014-28, along with details of the latest responsive controls in the UI5 toolkit. Whether you're following the Open SAP course or not, I'd encourage you to do the same, too.
I must say that I've not given up on Open SAP as a whole – in fact I'm eagerly awaiting the next Fiori related course … now that Fiori installation and configuration is out of the way with this first course, it could be full steam ahead for the UX part!
Update 20 Oct 2014: Since this post, there has been some discussion internally, on various email threads, and also publicly here and on Twitter. And today SAP posted "Help Shape the Next SAP Fiori Course" which acknowledges the issues with the lack of UX content and solicits input to determine the content for the next course. Well done Open SAP! This is a conversation in action. I'd encourage you to go over to the survey and add your thoughts.
Until next time, share & enjoy!
Transactional Fiori App Certification by Chiranjivi R D I touched on certification of Fiori apps in an earlier TWIF episode, 2014-31, where I pointed to a Partner Co-Innovation Workshop that mentioned certification of Fiori apps developed therein. Certification, at least to me, is not automatically a good thing. I'm strongly ambivalent (if that's possible) on certification generally, of consultants specifically, and of apps particularly.
This week, this article on Fiori app certification was brought to my attention by friend and fellow SAP Mentor Tobias Trapp. It's all about the certification of transactional Fiori apps built by partners. With Fiori, there's great emphasis on the UX principles, and rightly so. There are of course also the Gateway and Business Suite add-ons, but for me the primary goal for certification in this area must be how the Fiori app works from a user experience point of view. My general certification ambivalence is then given a run for its money here; I for one do think that without some kind of standards enforcement, the Fiori approach may be diluted. I've seen apps that are purportedly "Fiori" but just don't feel right.
Only time will tell. What is your experience of custom Fiori apps? Have you seen Fiori apps that, well, aren't?
User Experience Sessions at TechEd: SAP Screen Personas, Fiori, UX Strategy, Design Services by Peter Spielvogel SAP TechEd && d-code, arguably the most important event in SAP's annual calendar, is fast approaching. Already, the Las Vegas edition … which I like to call the "warm up before the main European event" :-) … is less than a month away. I noted the Fiori related sessions in a previous TWIF episode, 2014-35, and just this week Peter Spielvogel from SAP writes this post detailing some of them. Ironically, he does this in the SAPGUI area on the SAP Community Network (SCN).
I pointed out in TWIF 2014-35 that there didn't appear to be enough Fiori related sessions (although some folks on Twitter are complaining that all they hear about in relation to TechEd is Fiori and HANA, c'est la vie) but I'm hopeful that there will be at least some coverage in the "hallway track" and in the Code Jams and hands-on activities that run throughout the week.
In particular, I'd encourage you to look out for the SAP Web IDE stuff. This is the new name for SAP River RDE, which also has some history in the Web Application ToolkiT (WATT) and prior to that the SAP App Designer. What ancestry already! While some of us like to build Fiori apps from the ground up (coding view elements directly in XML, with our UI5 stickers adorning our laptops) there are a great number of people who need guidance. Guidance in both forms – technical, and design (see the certification piece earlier). And for these folks, and those looking for the right tools to extend existing SAP Fiori apps, the SAP Web IDE is something not to miss.
Introduction to SAP Fiori UX – an update I wrote about this course back at the beginning of August. Today, along with many thousands of co-participants, I'm well underway with the course materials, into Week 3. For those of you not taking part, here are the topics covered:
Despite these topic titles, I must admit to having expected a little more on the "UX" part of the title. So far, I don't remember seeing any real Fiori screen, much less an analysis of how and why it might have been designed that way, and certainly nothing about what lies underneath (the controls in the UI5 toolkit). But it's still relatively early days, and I haven't given up hope.
One thing I'm also not giving up hope on is the approach Open SAP will have to rectifying incorrect "correct" answers to questions in the weekly assignments. For those of you on the course (and therefore with access to the discussion areas), here's an example of where a question was asked, with the officially correct answer actually being incorrect. (There are other instances of this happening on the course too, but I think those are down to oversights rather than anything else.)
The answer in question, so to speak, related to the deployment steps for frontend and backend Fiori components, and whether they were the same. Of course, with the variations on system landscapes, ABAP and HANA stacks, and even the deployment tools themselves, the answer is "no". But this has been marked as incorrect by Open SAP. While in the grand scheme of things this hardly matters, to those taking the course it's both a matter of principle and an area that one would feel strongly about, being the type of person taking the course, i.e. one that enjoys exacting detail.
I'm sure that the Open SAP folks will sort this out before the course is over.
Before leaving this subject, I would also like to point out that the course content has been rather dry so far. For example, this week's lectures entail the long-winded description of configuration (especially in the area of role assignments in PFCG), only backed up by static slides. Unless I missed it, I didn't see any actual real live screencasts of configuration in action. I don't know about you, but I can only take so many slides with theory on them; I need to see things in action. As one of my favourite TV characters likes to say, "let the dog see the rabbit"!
Then yesterday (Sat 20 Sep) there was SAP CodeJam Liverpool, organised by Gareth Ryan. It was a UI5-themed day where I was totally honoured to work with Frederic Berg (one of the many UI5 heroes from Walldorf) taking the participants on an all-day introduction to building apps with UI5. We took a Fiori design led approach with the exercises and I would say that by the end of the day all the attendees had gained a good appreciation for UI5 and a decent understanding of the development approach. It was a lot of fun and very rewarding; not least because a couple of the participants were from the non-SAP developer ecosphere. Developer outreach, albeit small, in action!
Perhaps it's worth pointing out again that SAP Fiori is powered by UI5. To properly understand SAP Fiori from a developer perspective, UI5 is an essential skill to have.
Anyway, on to this week's picks.
How to launch "Web Dynpro ABAP" and "SAP GUI for HTML" Application Types from the SAP Fiori Launchpad by Jennifer Cha I've talked about the SAP Launchpad becoming the new portal a number of times in this TWIF series, but if you need more convincing, take a look at this step by step guide. SAP Fiori Launchpad started out life (in its previous "Launch Page" incarnation) as an initial access point to the Wave 1 ESS/MSS Fiori apps.
A lot has changed since then, not least the HTML5 architecture that powers it. But more importantly, the ability to make more available through this initial access point is increasing. SAP Fiori, part of SAP's "New, Renew, Enable" strategy[^n], specifically the "Renew" part, is not going to cover the entire functional breadth of, say, your ECC system. So having the ability to expose more traditional transactions in the same context as the next generation approach makes some sense, even if it does, in my mind, dilute the purity of design :-)
[^n]: actually this strategy now has a fourth strand, "Design Services". More on that another time, perhaps.
SAP Fiori and Google Analytics by Craig Gutjahr The integration of Google Analytics and web apps is nothing new of course. But this short screencast is a nice reminder of what's possible. The ability to track activity on a user basis, even on a page basis, is extremely valuable. Combine the detail that Google Analytics gives you with the ability to explicitly send details on page views from your Fiori app (on a certain event in UI5, such as a navigation), use that information for the next iteration of your app, focusing on roles and task-based activities, and you can build yourself a nice UX feedback loop.
By the way, there's a nice example of sending explicit events to Google Analytics in Joseph Adams's post "Optimizing page timings for Google Analytics".
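To make the idea concrete, here's a minimal sketch of what such a feedback hook might look like. The function name, the role parameter and the use of a custom dimension are my own invention for illustration, not taken from Craig's screencast; the payload fields (hitType, page, dimension1) follow the classic analytics.js conventions.

```javascript
// Hypothetical sketch: build a Google Analytics pageview payload from a
// UI5 route name and a user role, so page views can be analysed per role.
function buildPageviewPayload(routeName, userRole) {
  return {
    hitType: "pageview",
    page: "/" + routeName,     // track each route as a virtual page
    dimension1: userRole       // custom dimension for role/task analysis
  };
}

// In a UI5 app you might wire this to the router's routeMatched event:
// oRouter.attachRouteMatched(function (oEvent) {
//   ga("send", buildPageviewPayload(oEvent.getParameter("name"), sRole));
// });
```

Keeping the payload construction in a small pure function like this makes the tracking logic easy to test without any analytics library present.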
The Power of Design Thinking in Fiori Application Development by Sarah Lottman This is a good short piece on, well, basically, talking to the user to work out what they need. I'm still not sure what design thinking is, over and above putting yourself in the user's shoes and working out what they want, before developing stuff. Of course, this is very glib of me and I may have missed the mark, and the design process that Sarah describes is neither easy nor obvious. I myself am guilty of building software and then imposing it upon others, without having talked to them.
So perhaps the key takeaway is actually that one way to get design right is to use the building blocks that Sarah describes: persona creation, process and task flow mapping, and wireframing. Actually it's often fun to skip wireframing and jump straight to throwing UI5 control declarations into an XML view structure and throwing it at the screen. Or is that just me?
Jobs, Jobs, Jobs by Various It was going to happen sooner or later. Actually it already started a while ago, but these days I'm noticing more and more job postings. Postings mentioning Fiori specifically, and postings mentioning UI5 specifically, in the title. Thing is, with Fiori and UI5 being relatively new skills on the scene, there's room for even more confusion than normal in this area.
Take a recent post on Twitter, advertising a position thus: "Architect – Mobile Web & Fiori Job" at SAP in Bangalore (according to the link destination).
But reading the copy, the only mention of the word "Fiori" in the whole detail was in the title. Nowhere in the actual description. And the only mention of UI5 at all was as the last item in a list, almost an afterthought: "(JQM, Sencha, SAP UI5, etc)".
I don't understand what's going on here. So I guess we have to just keep an eye on the details of what is actually being offered. And on that subject, keep an eye on the details on the other side of the fence too. In a previous episode (TWIF 2014-29) I noted seeing a claim of "five years plus of SAP Fiori focused delivery". Remember, Fiori has existed for less than two years, UI5 for a bit more than that. Caveat, well, everyone.
That's it for this week, thanks for reading, see you soon!
One of the problems I have is that when I'm looking for an icon, the search term I have in my head is not necessarily going to match up with the name of the icon in the library.
For example, I might be looking for a "cog", with the icon on the left in mind, but I'm not going to be able to find it unless I use the term "action-settings".
And in the light of the session I gave this weekend at SAP Inside Track Sheffield on "Quick & Easy Apps with UI5", where I focused on single-file apps, albeit with full MVC, I decided to hack together a little smartphone-focused app where I could search for icons, and add my own "aliases" so that next time I searched, the search would look at my aliases too.
Itās a very simple affair, and in this first version, is designed to use the localStorage mechanism in modern browsers so that you build up your own set of aliases. Perhaps a future version might share aliases across different users, so that we can crowdsource and end up with the most useful custom search terms.
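The alias idea boils down to a tiny lookup layer in front of the icon name search. Here's a minimal sketch of how such a thing might work - all the names here (createIconFinder, the "iconAliases" storage key) are my own invention for illustration, not taken from the actual app - with the storage parameter standing in for window.localStorage, so anything offering getItem/setItem will do:

```javascript
// Sketch of an alias-aware icon search (hypothetical names throughout -
// createIconFinder and the "iconAliases" key are mine, not from the app).
// "storage" stands in for localStorage: anything with getItem/setItem
// works, which keeps the logic testable outside a browser.
function createIconFinder(iconNames, storage) {
  var KEY = "iconAliases";

  function loadAliases() {
    return JSON.parse(storage.getItem(KEY) || "{}");
  }

  return {
    // Remember that searches for "term" should also match "iconName"
    addAlias: function (term, iconName) {
      var aliases = loadAliases();
      var key = term.toLowerCase();
      aliases[key] = (aliases[key] || []).concat(iconName);
      storage.setItem(KEY, JSON.stringify(aliases));
    },

    // Substring match on the real icon names, then merge in any
    // icons the user has aliased to this term
    search: function (term) {
      var needle = term.toLowerCase();
      var direct = iconNames.filter(function (name) {
        return name.indexOf(needle) !== -1;
      });
      var aliased = (loadAliases()[needle] || []).filter(function (name) {
        return direct.indexOf(name) === -1;
      });
      return direct.concat(aliased);
    }
  };
}

// In-memory stand-in for window.localStorage
var memory = {};
var finder = createIconFinder(["action-settings", "add-photo"], {
  getItem: function (k) { return k in memory ? memory[k] : null; },
  setItem: function (k, v) { memory[k] = String(v); }
});

finder.addAlias("cog", "action-settings");
```

In the real app the alias map persists across sessions via localStorage; injecting the storage object is just what makes the core logic easy to exercise anywhere.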
Anyway, it's currently available at http://pipetree.com/ui5/projects/iconfinder/ and you can grab the sources from the Github repo (remembering that the whole point of this is that it's a single-file app!).
Here's a short screencast of a version of it in action:
Let me know what you think - is it useful? In any case, share & enjoy!
When you write a series of weekly posts, you're acutely aware of how fast the actual weeks fly by. And this last one was no exception. Lots of movement and activity in the SAP Fiori world ... let's get to it.
Introducing the New SuccessFactors UX based on SAP Fiori by Sam Yen This short video from Sam Yen, SAP's Chief Design Officer, is worth watching, not least for the soundbites that help underline how important SAP Fiori is for SAP, and therefore for us as customers and partners. Here are a couple of them:
"Design has been named one of the five priorities of the entire company"
"Fiori is now the design direction for all of SAP's solutions"
Clearly, building the new SuccessFactors complete user experience upon SAP Fiori is a significant next step in this direction. Even if you're not interested in any of the current SAP Fiori apps, be interested in SAP Fiori as a UX and UI technology. Not being interested is to miss out on one of the critical new generation platforms for enterprise apps in the SAP ecosphere.
Take Part in the SAP Fiori UX Design Challenge by Susanne Busemann Tomorrow sees the start of the OpenSAP course which I first mentioned in TWIF episode 2014-31 - the Introduction to SAP Fiori UX. As an optional part of this course, a design challenge has been set.
If you don't know already, a large part of the philosophy behind SAP Fiori is about the UX, as distinct from the UI. The UX you get from SAP Fiori is powered by the UI that is provided by the tremendously capable UI5 toolkit (see The Essentials - SAPUI5, OpenUI5 and Fiori for more details on UI5 and its relationship with Fiori).
Even as an out-and-out developer, and primarily (or at least originally) a backend developer - a "data plumber" - I have found in my UI5 and Fiori development experience so far that prototyping the user experience is an important part of building great apps. So I'm happy to see that folks are encouraged to dip their toes in the design pool.
See you on the course!
My Personal Ux, Fiori, Portal, Cloud Cheat Sheet by Aviad Rivlin My friend and fellow SAP Mentor Aviad has appeared on TWIF before, specifically in TWIF 2014-30, talking about the SAP Fiori Launchpad and the SAP Portal of course. This time he's back, with a nice little set of links to great resources relating to Fiori, Portal and the cloud. It's a super combination and not a little fascinating, for reasons I've mentioned before - there's a convergence of SAP Fiori's Launchpad with the older SAP Portal concepts, which is not unexpected as both serve similar functions.
Aviad intends to update the blog posts with new links as and when appropriate, so it's definitely worth bookmarking.
SAP Fiori Application Integration with SAP Enterprise Portal by Ido Fishler On the subject of SAP Fiori and SAP Portal, here's another timely blog post on the SAP Community Network by Ido Fishler. He takes the reader through the steps required to get an SAP Fiori app integrated (via iView) into the SAP Enterprise Portal. Whether you're running an SAP Portal or not, it's definitely worth a read - the "exhaust-knowledge" alone is worth the price of a coffee for sure.
Well, that's all for now, folks. I'm off to document episode four of a rather exciting series I'm building on the subject of OpenUI5. Until next time, share and enjoy!
Hello and welcome to another episode in This Week in Fiori (TWIF) - for week 35, the last week in August already. This week it's an all-SAP affair. Without further ado, let's get to it.
Catalog of SAP Fiori Apps by SAP This has recently started to appear on people's radar, and is a nice resource for summarising all the apps available so far. There are a lot of apps; according to a rough calculation, 370 are now listed.
I guess one issue with this catalog page is that it doesn't really scale, from a human-readable perspective, and you don't get a feel for where the majority of the apps lie. For that, I'd of course recommend my SAP Fiori App Analysis Tool that I mentioned in a previous TWIF episode (TWIF 2014-31). This tool lists the apps that were available at the time the tool was built (313 of them), and I need to get round to adding the new apps to the database. Of course, perhaps if I found a few of the right shaped tuits I might attempt to parse the source of this Catalog page. Ideally, SAP would supply a machine-readable dataset. Please?
Here's my rough calculation, by the way :-)
SAP Fiori Subtrack at SAP TechEd & d-code by SAP The SAP TechEd conference season is starting soon and the excitement is building already. This year there's a User Experience & User Interface Development track[^n]. Within this track there's an SAP Fiori subtrack, which is great to see (although not unexpected!). Here's a quick glance at the sessions in this subtrack in Berlin:
Mini CodeJams, Code Reviews, Lectures and Hands-on Workshops. There are not as many as I'd like, but it's a good start. Perhaps I'll see you there?
[^n]: Ironically, the SAP TechEd && d-code site makes things very difficult for me as a user - following links within the Agenda Builder breaks fundamental browsing contracts and expectations, such as being unable to go back having selected a track or subtrack. Bad UX at its best.
**Use Cases for Extending the UI of SAP Fiori Apps by Clement Selvaraj** One of the better (read: more comprehensive) documents to come out over the past few months, and one that has only just come to my attention, is this detailed PDF on extending SAP Fiori apps. It takes a specific functional scenario (Report Quality Issue) and walks the reader through a series of extension use cases. These use cases cover the extension concepts (extension points and controller hooks) and, as a nice by-product, give the reader insight into a little bit of how a real SAP Fiori app is put together under the covers. For example, it highlights the Sn views (S2.view.xml, S3.view.xml, and so on) which my colleague Brenton and I covered in our Understanding SAP Fiori Webinar a couple of months ago. See the accompanying video screencast "Understanding SAP Fiori" for more details.
Well, that's it for now, thanks for reading. I hope you're enjoying this TWIF series ... do please let me know if there's any way I can make things better; I'd love to hear from you. Until next time, share & enjoy!
Another week gone! I'm sitting in my "second living room", North Tea Power, drinking a fab coffee and sifting through the Fiori-related articles that came to my attention this week. And just this morning there was a very interesting conversation on Twitter that I also want to bring to your attention; not only because it relates to Fiori, but also because it involves some of the key thinkers and doers in this space, folks that I respect greatly. So, let's get to it.
Extensibility information for SAP Fiori by SAP In my 27 years hacking on SAP, I've seen the constant struggle between quality and quantity of SAP documentation. I cut my enterprise tech teeth on IBM mainframes - proprietary tech to the core, but my goodness did they have superb documentation, the quality and precision of which I've never seen since, to be honest. I'm sure I'm not the only one who's had a love-hate relationship with SAP documentation, but having recently been on the other side of the fence (involved in producing some documentation myself) I do know it's no easy task.
SAP Fiori is here to stay, as are the underlying tech layers; and we need to be prepared to embrace a new SAP software logistics world that is very different from the old but comfortable ABAP stack based one with which we're familiar. Software logistics? Code management, version control, deployments, and extensions & enhancements ... not least those modification-free ones that allow us to survive service pack updates and the like.
So it is with this in mind that I reviewed what extensibility documentation exists in the SAP Fiori space. While it touches many of the bases, it is still relatively sparse on detail, and still lacking in examples. Still, it is a start, and I encourage you to read it, if nothing else, to discover the areas that you need to know more about ... and persuade SAP to write more on.
Fiori, Personas and beyond: selecting the best UI for SAP processes by Chris Scott This is a nicely considered post on the SAP Community Network that takes a step back from Fiori and encourages the reader to consider all the options for improving the overall user experience (UX). It highlights that there are options other than SAP Fiori, of course, but more importantly it suggests, rightly, that the whole approach should be requirements-driven, with a focus on improving process. Sure, this sounds obvious, but sometimes it's easy to lose sight of the bigger picture when the tech is so compelling. It also goes some way to underline the basis of the SAP Fiori UX strategy - task/function focused, according to role, rather than the more traditional feature smorgasbord that we're used to in the UI that we drive by entering transaction codes.
SAP Fiori Prototyping Kit by SAP In TWIF 2014-28, I highlighted the SAP Fiori Design Guidelines. Bundled with these guidelines was a simple prototyping kit. The very fact that a prototyping kit exists suggests how important the user interface (UI) design process is if you want to produce good UX, and while there are different philosophies related to prototyping, a lowest-common-denominator approach is to mock stuff up with building blocks that represent UI component parts. The prototyping kit has these component parts, and has recently (this month) been updated. Definitely worth a download.
A useful side effect of tools like this is that we stand a better chance of producing appropriate, consistent and compatible "Fiori-like" UIs that don't jar when switching from one app to the next.
On UI5 and Fiori deployment and extensions by The Usual Suspects on Twitter A very interesting conversation came about on Twitter this morning, with UI5 and Fiori luminaries such as Graham Robinson, John Patterson and Jason Scott. It was about non-standard (i.e. not SAP standard) development workflows, and included thoughts on Fiori development and extensibility.
As with many Twitter conversations, a lot of what was not said - due to the 140-char nature of the microblogging platform - was just as important (bringing a modern nuance to "reading between the lines"). My take on the conversation, and the thoughts in the minds of the participants, was that we need to keep a close eye on where SAP is going with tooling, and where we as individual developers want to go - how the paths are similar and how they're different. Not everyone wants to use Eclipse, or even RDE, to develop and maintain Fiori applications. RDE - the River Development Environment - is of course a fabulous piece of engineering, but it should never be a one-size-fits-all solution.
One of the wonderful side effects of SAP embracing open standards and open source is the freedom we have to choose the tools, and to build with those tools the tool chains and workflows that best suit the particular environment and circumstances of the client and the design / developer teams. I want to make sure we don't lose sight of that side effect as time goes on.
Well, that's it for this week. Until next time, share & enjoy!
Hello again. Another week has passed, and the writing of this week's TWIF should have found me in the Lake District, but alas, due to circumstances too tedious to go into now, finds me about 90 miles south, back at home. Anyway, it's the end of the week and therefore time for some Fiori links and commentary. Let's get to them!
SAP Fiori Launchpad for Developers by Steffen Huester and Olivier Keimel In previous TWIF episodes I've mentioned the SAP Fiori Launchpad and its importance to the Fiori app ecosphere. It's slowly becoming the new lightweight portal, and rightly so. The SAP Fiori Launchpad has been designed to be cross-platform (ABAP, HANA and Cloud stacks) and in true SAP style this design shows through in the form of abstraction layers - service adapters, the shell renderer and the application container. In fact, it's the application container that might pique your interest, as we see that it can not only host UI5 apps (via the Component concept) but also Web Dynpro ABAP and SAP GUI for HTML apps.
This document, which applies to the User Interface Add-On 1.0 SPS 05 (am I the only one who still refers to this product as "UI2"?), is a great resource which explains the Launchpad architecture and includes some details, and dos & don'ts, on the Component-based approach to building and embedding apps. Yes, embedding - the Launchpad is a single HTML page (a resource with a URL typically ending "FioriLaunchpad.html") into which UI5 apps, in the form of Components, are loaded.
One thing in this document that made me smile was a couple of references to the UI5 Application Best Practices guide (also available in the SDK docu) which is the work of my own hand :-)
**Build me an app that looks just like Fiori by John Patterson** This article only recently came to my attention. It was published a few days ago in Inside SAP, but looking at some of the content towards the end (specifically about open sourcing), I think it was written a while ago. Nevertheless it's a good read and worthy of attention now. (Also, randomly, it reminds me of the title of the film "Bring Me the Head of Alfredo Garcia".)
Even now I come across folks who are still looking for a good explanation of Fiori, UI5 and the relationship between them, and also what UI5 offers. Sometimes I point them at my post "The essentials: SAPUI5, OpenUI5 and Fiori", but this article by John addresses that need nicely too.
(Warning: you need to complete a free signup to get to the content. Come on, Inside SAP, you can do better than that!)
**SAP Fiori Course Offerings by SAP** In TWIF 2014-31 I mentioned that the OpenSAP MOOC is offering a free course "Introduction to SAP Fiori UX" starting in September this year. I thought I'd take a look at what SAP offers in the way of more traditional courses relating to Fiori. This is what I found on the SAP Fiori curriculum page:
It's still early days, I think, but it's a fair representation of the skills required for Fiori:
Note that the GW100 course covers OData from a Gateway perspective, i.e. the OData server product mechanism from SAP for the ABAP stack. There doesn't seem to be coverage of the roughly equivalent OData server mechanism XSODATA on the HANA stack. With many of the SAP Fiori apps, specifically the analytical and factsheet ones*, requiring HANA as a backend, this seems to be a gap that should be filled sooner rather than later.
*See the SAP Fiori App Analysis tool for more details
What's New in SAP Fiori (Delivery July 2014) by SAP A nice coffee-time read is this series of What's New documents from SAP on the main SAP Fiori documentation site. The documents don't go into too much detail but do have pointers to where more information is available; they nicely summarise some of the new features and changes that are delivered in the ever increasing number of waves.
This time, like last time (for the Delivery May 2014 edition), the What's New covers Products, Infrastructure and Documentation. There again we have the significance and prominence of Fiori infrastructure, which of course includes the Launchpad, but also the set of layers between any given Fiori app and your backend SAP system. Worth keeping an eye on for sure.
Well, that just about wraps it up for this week. Until next time, share & enjoy!
Here we are, another week into the new Fiori-flavoured world, and as always, there are things to talk about and posts to mention. While it's been a relatively quiet week there have still been various "announcements" that company X or company Y is now supporting SAP Fiori, or has a Fiori-related offering which involves design, prototyping or deployment.
While the glass-half-empty folks might point out that this is a lot of marketing and bandwagoning, I like to think of it as a good sign that, as well as already being everything from a design philosophy ("Fiori") to a product ("SAP Fiori"), it's also gaining traction and mindshare in the wider ecosystem and becoming a definite context for engagement.
OK, let's get to the pointers for this week.
Build SAP Fiori-like UIs with SAPUI5 by Bertram Ganz While working as a member of the core UI5 team at SAP in Walldorf in 2013/2014, I was privileged to take part in the creation and presentation of SAP TechEd session CD168 "Building SAP Fiori-like UIs with SAPUI5" with a number of UI5 heroes like Thomas Marz, Frederic Berg, Bertram Ganz and Oliver Graeff. I wrote about the CD168 session in a post on the SAP Community Network, and since the delivery of the session at the SAP TechEd events in 2013, the slides, detailed exercise document and exercise solutions have been made available via Bertram's post.
Even though it was posted back in January this year, it's still an important post for a couple of reasons. First, the material is very comprehensive and takes you from a very basic and raw application all the way through to a rather accomplished Fiori application, introducing many features of UI5 that are key to Fiori applications along the way. But also, it shows us that designing and building Fiori applications is not just in SAP's hands - it can be in your hands too. Fiori is a concept big enough to share.
If you haven't already, take a look at this content to get a feel for what it's like to build Fiori apps. It's a pretty decent set of materials, and I'm very proud to be a co-author.
Why Pie Charts are not in SAP Fiori Chart Library by Vincent Monnier Like the reference to The Fiori Design Principles in the first post in this series back in week 27 (TWIF 2014-27), this post by a designer at SAP highlights that as well as development and the thought processes behind building software, there's also design and the thought processes behind building a great experience ... both of these things go into Fiori.
This is a relatively short post that highlights some of the general downsides to pie charts and points to some further reading. But it's the fact that the design process has been gone through, and also shared with the wider community, that is interesting. In fact, if nothing else, use this as a pointer to the whole SAP User Experience Community site. And if you want to know more about charts in SAP Fiori, see the chart section in the SAP Fiori Guidelines.
The UI5 Explored App by the UI5 Team The toolkit on which Fiori apps are built is UI5 (UI5 is the generic term I use for both the SAP-licensed version SAPUI5 and the open source-licensed version OpenUI5 ... see The Essentials - SAPUI5, OpenUI5 and Fiori for more info). The UI5 Software Development Kit (SDK) includes a large amount of documentation and example code, and part of that is known as the Explored App. It started out life specifically to showcase and provide example best-practice approaches for controls in the responsive "sap.m" library, but has graduated to being a top-level menu section within the SDK and covers controls beyond "sap.m" now too.
(As with the CD168 tutorial materials, I am proud to have had a hand in building the Explored App too ;-)
With the Explored App you can, well, explore many features and functions within UI5, a good number of which are used to build Fiori applications, and you'll start to recognise component parts, building blocks that are used and reused to provide features such as search, lists, buttons, dialogs, and so on. Let's pick one - the IconTabBar. In context, it typically looks like the lower half of this screenshot:
The IconTabBar is used to contain a number of tabbed sections, with the selection for each of the sections typically being round icons. The design changed slightly between SAP Fiori Wave 4 and 5; now there's more info shown in place of the icons.
Have a look around and see what Fiori building blocks you can recognise!
Well, the train is almost at Manchester Piccadilly now, so this brings this week's roundup to a close. As always, thanks for reading, and remember you can access the whole series with this TWIF category link: /category/twif/.
Share and enjoy!
Well, yet another week has gone by and we have new Fiori-related content to consume. And I was reminded of that early this morning after seeing a tweet and a screenshot from Tony de Thomasis showing SAP Fiori for TDMS 4.0 - the scope of SAP Fiori apps is indeed widening further. The tweet prompted me to think about reviewing the data for my online SAP Fiori App Analysis tool** with a view to updating it. Do you find it useful? Let me know in the comments or via Twitter (I'm @qmacro).
**the data is hand-gathered, see The SAP Fiori App Analysis application for some background. Ideally SAP could make this data available and keep it up to date for us, right?
Anyway, on to the picks for this week.
OpenSAP's Introduction to SAP Fiori UX, by Prakalp Phadnis, Elizabeth Thorburn & Jamie Cawley Well, that didn't take long! SAP's extremely popular and successful Massive Open Online Course (MOOC) system "OpenSAP" is offering a free course on SAP Fiori - specifically, the Fiori User Experience (UX). After all, UX is at the heart of a lot of what the Fiori philosophy is about.
I've said in the past that Fiori is "many things, including a state of mind". I'm hoping that this course, which promises lessons on fundamentals, latest features, installation, configuration and best practices for extensibility, will instill in the attendee a sense of what good looks like, and help to prevent possible dilution of the Fiori concepts.
The SAP Fiori Launchpad has been added to the UX Explorer! by Elizabeth Thorburn In last week's TWIF installment I mentioned the functional proximity of the SAP Fiori Launchpad and the SAP Portal, in reference to a post by Aviad Rivlin. This week SAP has taken another step towards surfacing info about the important Launchpad, by including it in the UX Explorer.
With the UX Explorer you can find out about different User Interface (UI) and UX products and technologies from SAP. While the current content for the Launchpad isn't overwhelming, it is there, which is a start. And there are a couple of things that stood out for me: it states loud and clear that the Launchpad was built using SAPUI5 (yay for the teams and my extended family in Walldorf!) and it is most definitely marked as "strategic" in terms of relevance for SAP's own application development.
Elizabeth is one of the tutors on the Introduction to SAP Fiori UX course, by the way.
Partner Co-Innovation Workshop - Build Your Own Fiori App by Jeffrey D'Silva For me this post is a bittersweet one. The SAP Co-Innovation labs are running a 3-day workshop for partners, covering design thinking, Fiori design principles, UI5 controls and more, culminating in the attendees building an app. It's not clear to me after reading the agenda and the description whether the app will be a mockup only (as detailed in the agenda) or complete and fully certified (as detailed in the description). My guess is that with two of the three days taken up with design (and rightly so), the result will be nearer a working mockup than something that has already reached SAP certification.
But here's the thing: SAP Fiori and the underlying technologies (UI5 and OData) are the fundamental building blocks of much of SAP's application future. So it's not only important for SAP themselves and SAP partners, but for SAP customers too. What customers were (and still are) building and extending in the land of classic and Web Dynpro - the approaches and techniques used, and the tools and platforms relied upon - will slowly but surely be superseded by Fiori, UI5 and OData flavoured equivalents.
Customers, ready thyself for Fiori-flavoured development! And by that, I mean a different approach to source code control, version management, extensibility, and more, as well as design and build techniques and libraries.
That's it for this week. Have a great weekend, and as always, share & enjoy!
Oracle Ships Nearly 60 Mobile Apps for JD Edwards by Chris Kanaracus What's interesting about this news is that there are many parallels with the SAP Fiori initiative. The apps that Oracle has released are free, and they're task-focused. One of the underlying design principles of Fiori is that the apps are task-based - a person with a given role needs to perform a specific task. This not only makes the apps simpler, but it makes them more appropriate for mobile use, where often the available focus time is shorter than when you're sitting in an office. And of course, after the pressure from customers, SAP Fiori apps are free too.
Finally, depending on your perspective, the fact that these apps are available in app stores is either a net positive or negative. For me, the appeal of Fiori is that it's (a) cross-platform/device, rather than restricted to mobile devices, and (b) hackable. This latter feature is why SAP applications, in my opinion, have been so successful in incarnations going right back to R/2, where I started - the source code is available to copy or modify.
**SAP Enterprise Portal 7.4 SP7 - SAP Fiori Launchpad on the SAP Portal and more by Aviad Rivlin** It's no secret that the SAP Fiori Launchpad and the SAP Portal both operate in a similar space - high-level consolidated access to functions and applications in SAP backend systems. There's some confusion over SAP's strategy in this area, and a lot of questions exist. From my perspective, the two initiatives are converging, from both technical and functional points of view. This post goes some way to help further clarify, or at least give some background to, SAP's attempt at aligning the user experience of both Portal and Fiori Launchpad.
More Fiori! New Updates to SAP Fiori Rapid Deployment Solutions by Bob Caswell In TWIF 2014-28 I wrote about the Rapid Deployment Solutions (RDS) that SAP brought out earlier this year in the Fiori arena. This week there's an update to the solutions that SAP offer, with more apps covered, a greater emphasis on user experience adoption, and perhaps most significantly for me, an added focus on Gateway. SAP Fiori apps are nothing without OData, and for the ABAP stack, the SAP Gateway product is essential.
OpenUI5 MultiComboBox First Look by me Remembering that SAP Fiori apps are built with OData on the backend and with UI5 on the frontend, I thought I'd end this week's TWIF with a link to a short (12min) video that explores a specific UI5 control from the sap.m library.
Just before OSCON, version 1.22 of OpenUI5 was released. This was a huge release with many new features. OpenUI5 is the Open Source version of SAPUI5 upon which SAP Fiori apps are built, of course. And SAP Fiori apps specifically, being responsive by design, are built with controls from the UI5 library that contains the responsive controls, namely sap.m. This library gained a number of new controls in the 1.22 release, and this video explores just one of them - the sap.m.MultiComboBox control. Even if you're non-technical, this video will hopefully give you an insight into the small but perfectly formed building blocks of SAP Fiori apps.
Well, that just about wraps it up for this week. Until next time, share and enjoy!
and I added a key to this root array so it looked like this:
I did this programmatically in the requestCompleted event of the model mechanism, as you can see in the Gist for the MultiComboBox.html file, specifically starting at line 38:
```javascript
oModel.attachEventOnce('requestCompleted', function(oEvent) {
    var oModel = oEvent.getSource();
    oModel.setData({
        "ProductCategories" : oModel.getData()
    });
});
```
However, while fun and interesting, I want to point out that this is not absolutely necessary. The model will still support an unkeyed root element such as this array, as shown in the first screenshot above. You can see how this is done in the [Gist for the MultiComboBox-without-Keyed-Root.html file](https://gist.github.com/qmacro/973aea751b00654b399a#file-multicombobox-without-keyed-root-html) - the difference is that we don't need to manipulate the data in the requestCompleted event, and the binding for the MultiComboBox items aggregation looks like this:
`{/}`
rather than this:
`{/ProductCategories}`
Of course, having an unkeyed root element means that you can't have anything else in that JSON source, which may cause you issues further down the line. But it's not critical for this example.
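Stripped of the UI5 specifics, the reshape in the requestCompleted handler boils down to wrapping the root array in a keyed object. Here's a plain JavaScript sketch of just that step (the data values and the keyRoot helper name are made-up placeholders, not from the Gist):

```javascript
// The service returns an unkeyed root: just an array. Wrapping it in
// an object under a single key is what changes the usable binding path
// from "/" to "/ProductCategories", and leaves room for sibling
// properties later without disturbing the existing binding.
var raw = [
  { CategoryName: "Beverages" },
  { CategoryName: "Condiments" }
];

function keyRoot(data, key) {
  var keyed = {};
  keyed[key] = data;
  return keyed;
}

var modelData = keyRoot(raw, "ProductCategories");
// modelData.ProductCategories is the same array as before, now
// addressable under a key rather than sitting at the root.
```

In the UI5 case this is exactly what passing the wrapped object to oModel.setData achieves.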
This year SAP is an OSCON Gold Sponsor and there are a number of related sessions. One of these was a 3.5-hour tutorial on OpenUI5:
Discover OpenUI5 - The New Web UI Library from SAP
We (Andreas Kunz, Frederic Berg and me) presented this tutorial which was based on an updated version of some work we and other UI5 team members had previously prepared for SAP TechEd. It was a lot of fun, and hopefully, educational for the attendees.
Of course, being Open Source related, we've made the session material (slides), comprehensive exercise document, the starter project and all the solutions to the exercises available. We collaborated on a Github repo, and it's all there:
https://github.com/BluefinSolutions/OpenUI5-OSCON-2014
So have at it, see how you get on, and spread the OpenUI5 love.
Share & enjoy!
I'm currently writing this episode of This Week in Fiori (TWIF) on a flight from Manchester via Philadelphia to Portland for O'Reilly's Open Source Convention OSCON. It's a super conference on all things Open Source and I can heartily recommend it.
Back in 2001, 2002 and 2003 I attended OSCON and spoke on the subject of SAP and Open Source. 2014 has come round and I'm back, this time on the subject of OpenUI5, the Open Sourced version of SAP's UI5 toolkit. Along with a couple of friends & SAP colleagues, Andreas Kunz and Frederic Berg, we're giving a tutorial on the subject, Discover OpenUI5 - The New Web UI Library from SAP, as well as a presentation.
So I'd like to start the week by giving a couple of pointers to background material ("UI5 Credits" and "The Essentials"), to help you get a good idea of the Open Source software upon which UI5 is built. And of course, it's upon UI5 that SAP Fiori apps are built.
UI5 Credits by the UI5 team This part of the UI5 Software Development Kit (SDK) lists the libraries, toolkits and other software in the Open Source domain that are used to power parts of UI5.
The Essentials: SAPUI5, OpenUI5 and Fiori by me If youāre interested in finding out more about the relationships between SAPUI5, OpenUI5 and Fiori, this short post should clear things up.
Updated Version of SAP Fiori Client by John Wargo The SAP Fiori Client is a hybrid app for specific mobile devices (such as those running Android and iOS), designed specifically to run SAP Fiori apps. Built using Cordova (PhoneGap), itās a hybrid app in that it is an OS-native install, but is effectively a shell around a browser core, which then acts as the runtime SAP Fiori as usual.
The SAP Fiori Client was designed with performance in mind; amongst other things, it caches the runtime to reduce startup costs. Since the initial release there's been an update, described in this post. The update contains bug fixes and relatively minor new functionality, but it's a good sign that maintenance is ongoing. The SAP Fiori Client is definitely worth a look.
The SAP Fiori Fit: Part 1 -- Your Fiori Strategy by Molly Maple This is a nicely balanced piece in the SAP Mobile section of the SAP Community Network site. It talks about what SAP Fiori is (a "UX toolkit") and what it isn't (a "mobile platform"). It talks about the orthogonal styles of application delivery: Function-oriented (found in the traditional "dynpro-style" apps) and task-oriented (exemplified by the SAP Fiori apps themselves). And it covers some of the current benefits and shortcomings of Fiori when compared to the SAP Mobile Platform.
HR Renewal & SAP Fiori Q&A Transcript by Jeremy Masters SAPInsider ran a recent Q&A session focused on HR Renewal, Employee Self Service / Manager Self Service (ESS/MSS) and SAP Fiori. Being a chat-based Q&A, the questions and answers are all available. Folks asked about the ease of implementation, about the relationship with (and future demise of) Web Dynpro, and of course the Portal conundrum, made more interesting by the arrival of SAP Fiori's Launchpad. Reading this Q&A gives you a good insight into what your peers are really thinking.
Of course, I have to take some slight exception to one of Jeremy's answers regarding a reference to "web services" and Gateway :-) Yes, OData has the concept of a service document, and it's on the web (HTTP), but the specific phrase "web services" conjures up something altogether more complex and heavyweight (and less RESTful).
Well that just about wraps it up for this week. And while I'm thousands of feet over the Atlantic, currently somewhere due south of Iceland, I wanted to leave you with an observation: It seems that each week, new companies and offerings are appearing in the SAP Fiori arena. Webinars (yep, we hosted a webinar on Understanding SAP Fiori last month), demonstrations, Q&A sessions, fixed price implementation services and offers of free prototyping.
The best I saw this week was a statement from an SAP technology consulting company in the US, where the SAP Fiori practice lead claimed to have "five years plus of SAP Fiori focused delivery". Seeing as SAP Fiori has been around for less than two years, that's quite impressive! :-)
]]>Already a week has passed since my first post in this series and the Fiori related content is increasing. A lot of that is technical, as folks get to grips with the configuration and development mechanisms that underpin Fiori. Perhaps I'll have a technical "This Week in Fiori" (TWIF) post next time, but for now, here are some more articles, along with some observations.
**SAP Fiori Brings Out Four Tools To Improve User Experience by Steve Anderson** The thing that struck me about this article is that the tools that Steve writes about -- rapid deployment solutions, proof of concept services, and design thinking -- implicitly underline the fact that User Experience (UX) has really arrived in the SAP world of enterprise software. UX has stopped just being a natural by-product of application design, as it might be when dynpro-oriented applications are built with a transactional focus; it's now an explicit and important part of the overall process.
**SAP Fiori UX -- Apps Overview with Screenshots by Oliver Lehmann** This is a link to a great PDF-based resource containing details of the current SAP Fiori applications, of which there are over 300 (313 to be precise -- see the "Webinar & More: Understanding SAP Fiori" link below). With the organisation by Line of Business (LoB) category, and role, and plenty of screenshots, it's extremely useful as a visual reference, especially if you haven't seen many of the SAP Fiori apps in action yet.
**SAP Fiori Design Guidelines** Talking of great resources, one not to miss is this set of (beta) design guidelines for Fiori from SAP. I spent 6 months working as a member of the core UI5 team at SAP Walldorf in 2013/2014 and in my time there I really got to appreciate the tremendous passion, effort and attention to detail that the design and development teams exhibit on a daily basis. A lot of this detail, essential in making the SAP Fiori UX what it is today, has been collated and made available in a very easy-to-follow set of guidelines. As we move from "SAP Fiori" to "Fiori" and start to build our own apps, these guidelines will play an important role.
Webinar & More: Understanding SAP Fiori by me A few weeks ago, Brenton O'Callaghan and I hosted a public Bluefin Solutions webinar "Understanding SAP Fiori" which was very well attended and fun to do. I wrote up some details in a followup post here, which you may find interesting. In particular, I'd like to draw your attention to a couple of things: there's the SAP Fiori App Analysis tool that I wrote (itself a Fiori style app) which helps you explore the details of the currently available SAP Fiori apps, all 313 of them; it's accompanied by a short explanatory video too. Then there's all the stuff that Brenton and I didn't manage to cover, in particular a deep dive into some of the details of an SAP Fiori application's architecture. We recorded this as a sort of "Director's Cut" video "Understanding SAP Fiori" as a follow on to the webinar itself.
So that's it for this week, until next time -- share & enjoy!
]]>As you may well have heard, SAP announced earlier last month at Sapphire that, along with SAP Personas, SAP Fiori is "now included" within the underlying licences for SAP software.
This is a significant milestone both in SAP's openness to customer & partner concerns and in its drive to renew, nay overhaul, the user experience (UX) for its business software. The significance did not go unappreciated, especially as our very own John Appleby was a key participant in the conversations to free SAP Fiori.
Our webinar "Understanding SAP Fiori" covered, in equal parts:
SAP Fiori was released around this time last year, with 25 Employee Self Service / Manager Self Service (ESS/MSS) applications in Wave 1. Since then a number of Waves have been delivered along with improvements to the general UI infrastructure that supports them, most significantly the move to the SAP Launchpad (which is also converging with SAP Enterprise Portal technology).
There are now over 310 applications covering the three core SAP Fiori application archetypes - Transactional, Analytical and Factsheet.
Note that only Transactional applications can be powered by non-HANA database platforms; the Analytical and Factsheet applications require SAP HANA.
There's a growing coverage of applications for various sectors of the SAP business application spectrum. To take the Enterprise Resource Planning (ERP) sector as an example, there are applications for Financials, Travel Management, Retail, Production Planning & Control, Project System, Materials Management, Sales & Distribution, Logistics Execution, Quality Management, Plant Maintenance, Global Trade Management, Human Capital Management, Insurance and a number of new SAP Smart Business applications.
To explore this information and more, you might be interested to try out the SAP Fiori App Analysis application, something simple that I built to help prepare for the webinar. It is an SAP Fiori style app itself which allows you to explore the SAP Fiori application offerings; a relationship which will perhaps bring a smile to the faces of those fans of Douglas Hofstadter and his writings about "meta".
The architecture for SAP Fiori is nothing brand new, at least not in significant areas. SAP Fiori is, at its barest essentials, the combination of SAPUI5 and OData (via Gateway). But because SAP Fiori and everything that it embodies -- from design patterns, to development principles, the use of HTML5 for client side execution and a unified API for backend (read-write) consumption -- is essentially a significant part of SAP's future application development direction, and arguably much greater than the sum of its parts, it's essential that we as customers and partners understand how Fiori ticks.
Note that I said "Fiori" and not "SAP Fiori", because we can and should develop Fiori applications too. We can already extend and enhance existing SAP Fiori applications; the next logical step (one that some of us have taken already) is to build our own. Fiori is not just a pretty sticking plaster over SAP's core, it is a model of how applications could and should be developed in the future. Our ABAP-based skills are not side-lined, indeed quite the contrary: Aside from the more obvious point that OData services powered by Gateway are written in ABAP, the software logistics, standards, processes and procedures that have been refined over the years apply equally to the application lifecycles in the new Fiori context.
Deep dive into SAP Fiori application architecture An hour isn't long enough to cover everything we wanted to say in this initial SAP Fiori webinar. So Brenton and I sat down the next day, in the Smallest Office In The World (reminiscent of the office that Sam Lowry is given in the classic film Brazil) and recorded a "Director's Cut" extension to the webinar itself (which we didn't record).
This was a deep dive into SAP Fiori application architecture, and covers lots of low level detail on how an SAP Fiori application ticks. We look into the general architecture of applications and focus specifically on one, the Approve Purchase Contracts transactional application. Grab a coffee and a biscuit and let us guide you through the new world; even if you're predominantly functionally focused, you will still get an understanding of the patterns and approaches that the SAP developer teams have taken for our application future.
If you attended the webinar (it was well attended!), I hope you enjoyed it and found it useful. If you didn't, I hope at least that this post gave you some insight, and I'd encourage you to watch the deep dive video and explore the SAP Fiori application offerings with the SAP Fiori App Analysis application.
Until then, share and enjoy!
The interest in SAP Fiori and the User Experience (UX) renewal at SAP is growing week on week. Ever since the launch of SAP Fiori Wave 1 back in summer 2013, with 25 Employee Self Service / Manager Self Service (ESS/MSS) apps, the momentum has been growing. Not surprising, given these things:
Moreover, with the announcement at Sapphire 2014 in Orlando this year that SAP Fiori, along with SAP Personas, is now included in the existing licence and no extra fees are applicable, that interest has changed gear completely. As a result, there are plenty of articles to read; I thought I'd share my top picks of articles and posts that are doing the rounds right now.
How SAP is Reinventing the User Experience by Sam Yen. This is a Q&A style interview with Sam that was done a couple of months back, but it's a must-read not only given the recent Sapphire announcements, but also because it underlines the clarity of statement for SAP's UX and User Interface (UI) direction. Regarding strategy, Sam states: "With SAP Fiori, we're able to say 'This is the future direction of the SAP experience'. All SAP solutions are going to be converging in this direction". This nicely echoes a piece I wrote in 2012 -- "SAPUI5 -- The Future Direction of SAP UI Development?" -- around a year before SAP Fiori was announced. It was clear from the state and potential of the UI5 toolkit even back then that the HTML5-based outside-in UI paradigm at SAP was here to stay.
Becoming Simple takes focus -- now Fiori and Personas are free -- how do you target your UX efforts? by Jocelyn Dart. This is a good in-depth piece which talks about UX, a subject relatively unknown in the SAP world until recently. There's an interview with two folks in the SAP UX space, one coming from SAP's Design and Co-innovation centre, and information on SAP's UX Advisory service, which is designed to help customers shape their design skills and strategy.
**Fiori Changes Perception of Campus Life! by Rob Jonkers.** Earlier this year I flew to SAP Labs Palo Alto to attend a board meeting in my role as a member of the SAP Developer Advisory Board. While there, I chanced to meet some of the members of HERUG -- the Higher Education & Research User Group. This is an interesting and well established group within the SAP ecosphere, and they have their own focus, goals and direction. But what makes them part of the ecosphere is their common interest in UX, and this post captures that very well. The breadth of functional coverage for Fiori is huge.
The Fiori Design Principles by Kai Richter. In my role as an SAP Mentor I'm lucky enough to be able to attend and sometimes speak at some internal events, one of which was DKOM last year, where I saw Kai Richter speak. Kai is part of a large team of designers at SAP who are responsible for the UX that SAP Fiori brings. The members of the UI5 development and design teams are heroes of this new SAP era. This is a short article but captures nicely the principles and the essence of what Fiori means from a chief designer's perspective.
If you have any must-read Fiori articles, let me know!
]]>We recorded it as a Google+ Hangout and published it on YouTube.
Here are some of the things that we covered:
plus a small update to the SAP Fiori App Analysis app - select an app from the list to get a popup with more information, including a link to the official SAP docu for the selected app. (For more background on this app, see another short video "The SAP Fiori App Analysis application" also on YouTube.)
It was a fun 30+ minutes, we hope you enjoy it too!
]]>We'd like to ask you, as members of the general SAP Developer Community, for your thoughts. There isn't a particular agenda or categorisation we'd like to impose here, we just want to make it open enough for you to write what you think.
Here's a form where you can add your thoughts.
We can't of course guarantee that there will be enough time to air all the questions but we'll do our best!
Thank you.
]]>Session
General
SAPUI5
OpenUI5
JSBin
*I realise now why the people watching the codecast also got "No data" later on in their binding display -- it's because I wasn't using a proxy prefix for the OData service; I was using my Chrome Canary, which by default opens with web security disabled, so it just worked for me. More on that in another post!
]]>Here's a quick list of links to the activities and organisations we mentioned in the talk.
CoderDojo (Our Manchester CoderDojo is hosted at the fantastic Sharp Project)
And if you need any more convincing about our computational future, you may be interested in this TEDx talk on "Our Computational Future".
Share and enjoy!
]]>Here are the links to what was mentioned.
Near the start of the recording, Ian mentioned our previous 2-part SAP CodeTalk on SAPUI5 and Fiori.
I talked about the differences between SAPUI5, OpenUI5 and where they fit with Fiori. Here's a post explaining that in more detail: "The essentials: SAPUI5, OpenUI5 and Fiori".
You can compare what's available in SAPUI5 and OpenUI5 by looking at their respective API references: SAPUI5 API Reference and OpenUI5 API Reference.
Andreas Kunz's post on SCN "What is OpenUI5 / SAPUI5?", published when the open sourcing announcement was made. Just before the announcement was made, Jan Penninkhof's post "13 reasons why SAP should open-source SAPUI5" was published.
OpenUI5's "home" on the web is GitHub: http://sap.github.io/openui5/. There's also a fledgling blog at http://openui5.tumblr.com with an inaugural "We're open!" post.
Bug reporting for OpenUI5 is possible via GitHub issues; please read the "Report a Bug" page for more info.
Technical (programming-related) Q&A is active under the "sapui5" tag on Stack Overflow (even though the questions are mostly independent of whether it's SAPUI5 or OpenUI5).
Recently the OpenUI5 library was added to the list of selectable libraries in JSBin, and there is a small but growing list of templates for JSBin based snippets too (contributions welcome!)
Last but not least, there's a Public SAP Mentor Monday webinar this coming Mon 24 Mar 2014 on UI5, with special guest Andreas Kunz. Come along and attend, all are welcome!
]]>Here's some quick news about a small step forward with respect to example and demonstration code snippets: jsbin.com now supports the automatic insertion of the OpenUI5 bootstrap. Select the "Add library" menu option, choose the OpenUI5 entry:
and lo, the bootstrap script tag is inserted, ready for you to go:
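For reference, the inserted bootstrap is a script tag along these lines. This is shown as an assumption for illustration -- the exact URL, version and default attributes that JSBin inserts may differ:

```html
<!-- Typical OpenUI5 bootstrap; URL, theme and libs are illustrative assumptions -->
<script
  id="sap-ui-bootstrap"
  src="https://openui5.hana.ondemand.com/resources/sap-ui-core.js"
  data-sap-ui-theme="sap_bluecrystal"
  data-sap-ui-libs="sap.m"></script>
```

The `data-sap-ui-libs` attribute controls which control libraries are preloaded, so you'd adjust it to suit your snippet.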
Reaching out and bringing the SAP and non-SAP developer communities closer, one small step at a time.
And if you're interested in how this came about, see this pull request on Github: https://github.com/jsbin/jsbin/pull/1220.
Share & enjoy!
]]>The oft unspoken status quo with the SAP technical community is that the members operate within a bubble. It's a very large and comfortable bubble that powers and is powered by the activity within; folks like you and me learning, arguing, corresponding and building within communities like this one - the SAP Community Network. We have SAP TechEd, which is now called d-code. We have SAP Inside Tracks. We have InnoJams, DemoJams and University Alliance events too. Every one of these events, and event types, are great and should continue. But there's a disconnect that I feel is moving closer to the surface, becoming more obvious. This disconnect is that this bubble, this membrane that sustains us, is in many areas non-permeable.
There are folks who operate on both sides of that bubble's surface. Folks that attend technology conferences that are not SAP related. Folks that are involved in developer communities that have their roots outside the SAP developer ecosphere. Folks that write on topics that are not directly related to SAP technologies (but with a short leap of imagination surely are). But these folks are the exception.
SAP's progress in innovation has been slowly turning the company's technology inside out. Moving from the proprietary to the de facto to the standard. Embracing what's out there, what's outside the bubble. HTTP. REST-informed approaches to integration. OData. JavaScript and browser-centric applications. Yes, in this last example I'm thinking of SAP's UI5. In particular I'm thinking about what SAP are doing with OpenUI5 - open sourcing the very toolkit that powers SAP's future direction of UI development. With that activity, SAP and the UI5 teams are reaching out to the wider developer ecospheres, to the developer communities beyond our bubble. If nothing else, we need these non-SAP developers to join with us to build out the next decade.
I try to play my part, and have done for a while. I've spoken at OSCON, JabberConf, FOSDEM and other conferences over the years, and attended others such as Strata and Google I/O too. I've been an active participant in various non-SAP tech communities in areas such as Perl, XMPP and Google technologies. This is not about me though, it's about us, the SAP developer community as a whole. What can we do to burst the bubble, to help our ecosphere and encourage SAP to continue its journey outwards? One example that's close to my heart is to encourage quality Q&A on the subject of UI5 on the Stack Overflow site. But that's just one example.
How can we reach out to the wider developer ecosphere? If we do it, and do it with the right intentions, everybody wins.
Update 04 Mar 2014
The massively popular code sharing and collaboration site jsbin.com now supports OpenUI5 bootstrapping. Read this post for more details. Step by step!
]]>I thought it would be a nice little coffee-time exercise to try and reproduce one of the Fiori app pages shown in the screenshots in that post:
So I did, and as I did it I recorded it to share. I thought I'd write a few notes here on what was covered, and there's a link to the video and the code at the end.
The XML views in this single-page MVC example are defined in a special script tag:
<script id="view1" type="sapui5/xmlview">
  <mvc:View
    controllerName="local.controller"
    xmlns:mvc="sap.ui.core.mvc"
    xmlns="sap.m">
    <!-- Add your XML-based controls here -->
  </mvc:View>
</script>
and then picked up in the view instantiation like this:
var oView = sap.ui.xmlview({
  viewContent: jQuery('#view1').html()
});
This is a Fiori UI, so the controls used are from the sap.m library.
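For completeness, here's how those pieces might fit together in a single self-contained page. This is a sketch under stated assumptions, not the exact code from the video: the bootstrap URL and theme, the empty `local.controller`, and the `Page`/`Text` controls are all illustrative.

```html
<!DOCTYPE html>
<html>
<head>
  <!-- OpenUI5 bootstrap; URL and theme are assumptions for illustration -->
  <script
    id="sap-ui-bootstrap"
    src="https://openui5.hana.ondemand.com/resources/sap-ui-core.js"
    data-sap-ui-theme="sap_bluecrystal"
    data-sap-ui-libs="sap.m"></script>

  <!-- The XML view, embedded in a script tag that the browser itself ignores -->
  <script id="view1" type="sapui5/xmlview">
    <mvc:View
      controllerName="local.controller"
      xmlns:mvc="sap.ui.core.mvc"
      xmlns="sap.m">
      <Page title="Mockup">
        <Text text="Hello Fiori" />
      </Page>
    </mvc:View>
  </script>

  <script>
    // Define the (empty) controller the view declares
    sap.ui.controller("local.controller", {});

    // Once UI5 has booted, instantiate the XML view from the
    // script tag's content and place it in the page body
    sap.ui.getCore().attachInit(function () {
      sap.ui.xmlview({
        viewContent: jQuery('#view1').html()
      }).placeAt('content');
    });
  </script>
</head>
<body class="sapUiBody" id="content"></body>
</html>
```

The `type="sapui5/xmlview"` attribute is arbitrary as far as the browser is concerned; its only job is to stop the browser executing the tag's content as JavaScript, so jQuery can read it back as text.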
https://www.youtube.com/watch?v=RJ8Kg14vhdE
I have of course made the code available, in the sapui5bin repo on Github:
https://github.com/qmacro/sapui5bin/blob/master/SinglePageExamples/PayrollControlCenterMockup.html
Share and enjoy!
]]>UI5. Otherwise known as SAPUI5. But what about OpenUI5? Plus, because no doubt you've heard the word "Fiori" in the same sentence as UI5, where does SAP Fiori fit in? Read on to find out this, and more.
Note that throughout this post, I'm also deliberately using the term UI5, and have been doing in forums on the SAP Community Network, in answers on Stack Overflow and elsewhere for a while now. It's a useful (and short!) umbrella term that encompasses a number of things, all related.
SAPUI5 is the name of the toolkit that SAP has been building for the past three or so years. You'd be forgiven for being surprised at the length of time it's existed, because it's only really started to gain attention in the last year or so. I wrote about SAPUI5 in May 2012, describing it as "the future direction of SAP UI development", and I stand by my prediction. And the official name? In the same way that the characters in Iain M Banks' masterful science fiction series about The Culture have very long names, and practical short ones too, the official name for SAPUI5 is the "UI Development Toolkit for HTML5" ... which is why most people do refer to it as SAPUI5.
SAPUI5 is a series of core and functionally focused libraries and a runtime environment. The core provides essential services such as module loading and management, eventing, navigation, data management and various application development concepts (such as Model-View-Controller). The libraries provide collections of controls that are used as UI building blocks in apps -- tables, lists, date-pickers, input fields and forms, buttons, and so on. Some controls are simple (like the Button), others are more complex (like the Shell, or the Table), but all work together to provide the interactive components from which applications can be built.
The applications that are built with SAPUI5 are applications that run in the browser. They are HTML5, JavaScript and CSS based. When you invoke an app, the application itself is downloaded to the browser, along with the SAPUI5 runtime.
There's a theming concept for the controls within SAPUI5, which is why you might have seen different designs in screenshots. The dominant theme so far for desktop-focused controls was "Gold Reflection", whereas the dominant theme for mobile and responsive controls is "Blue Crystal". You will see a convergence on Blue Crystal for desktop-focused controls -- in fact, if you examine the latest SAPUI5 Software Development Kit (SDK) documentation, you'll notice that this has already happened; the desktop-focused SAPUI5 controls (with which the SDK itself is built) are now themed with Blue Crystal.
OpenUI5 is SAPUI5's sibling. While the use of SAPUI5 is subject to an SAP licence, OpenUI5 is Open Source. This is a big deal, and very important for many reasons, best left for another post. Suffice it to say that in December 2013 SAP surprised us all by open sourcing UI5. The fact that they actually open sourced it wasn't so much of a surprise, many of us outside and inside of SAP were lobbying for it to happen. What surprised us was how quickly they turned it around (well done SAP!).
OpenUI5 has its own SDK, and its own presence on the web on GitHub, which is currently the most important place for Open Source projects such as this. SAP has a way to go yet in fully embracing all of the Open Source concepts, but it's getting there, and the all important first step has already been taken. SAP and developers like me can start to more properly engage with Open Source developers outside the SAP ecosphere, developers with skills and expertise in UI/UX and many other areas. One of the ways SAP will continue to be relevant is by reaching out in this direction.
There are a number of differences between OpenUI5 and SAPUI5, mostly related to libraries that are currently missing from the Open Source version. But the essentials (sap.ui, sap.m) are there. If you've written a UI5 powered app, as long as it doesn't use charting, for example, there's a good chance that you can just switch the toolkits and it will still work. Of course, there's more to the detail, but that gives you a rough idea.
Aaahh, Fiori. Let a thousand meanings bloom! What Fiori is and isn't, is again the subject for a long post of its own. But it's important to include Fiori here in this rundown, because of its close relation to UI5.
SAP Fiori is a series of app suites, being introduced in waves. The apps in these waves are written by SAP app developers. But Fiori is also a development approach, a design approach, which has a number of soft constraints. And when an app is built to conform to those constraints, it exhibits Fiori-like features: simple and recognisable design, easy to use, a role-based approach, and following one of a core set of UI patterns.
And crucially, Fiori apps are built with UI5. More specifically, they use specific libraries from the UI5 toolkit, the most significant one by far being "sap.m". The "m" in "sap.m" stands for "mobile", but as we know, responsive is the new mobile, and this is essential in ensuring that Fiori apps run on all devices -- smartphones, tablets and desktops -- as the sap.m UI5 controls are designed from the ground up to work responsively.
So putting these two observations together -- that Fiori is a design and development approach and set of constraints, and that Fiori apps are built with UI5 -- it stands to reason that you too, as an SAP customer, can build your own Fiori apps. With some expert help and guidance, you can join in the UI/UX renewal yourself, and if SAP don't offer an app that suits your requirements, you can build one yourself. Not only that, but if you build it right, it will happily live and run inside the Fiori Launchpad alongside the SAP-delivered apps.
So there you have it. Hopefully if you've read this far, you'll have a better understanding of the terms, and how what the terms represent relate to each other. You may like to know that there's a public SAP Mentor Monday session on UI5 that I'm arranging and hosting on Mon 24 Mar 2014. I'll be joined by special guest Andreas Kunz, from the UI5 development team in Walldorf. All are welcome.
And if you'd like to hear more about SAP's open sourcing of UI5, or Fiori development, leave me a comment below!
UPDATE 25/03/2014: Links to the recording and items mentioned in this session are available.
For those of you who don't know, a public SAP Mentor Monday is an hour-long webinar format where everyone is invited and the subject is a specific topic. The subject of this public SAP Mentor Monday is UI5. That is to say, SAPUI5 and OpenUI5 - the licenced and open source versions both. UI5 is the toolkit that is powering the UI/UX revolution at SAP, and we have a special guest that will join us from the UI5 team in Walldorf - Andreas Kunz.
These are exciting times for SAP, and for me there's no place nearer the epicentre of the visible renewal than UI5.
Join the webinar to hear about and discuss UI5, with folk who share your interest. I'll be hosting it, and if you want to submit questions in advance, you can do so using this form.
]]>What was the cause of WAP and WML's failure? For many, it was that the application protocol (WAP) and markup language (WML) were custom designed for specific target devices. Mobile phones. Mobile phones turned into smartphones, Edge turned into 3G; essentially, the device in our pocket became a pretty well-connected small computer.
Now, I've nothing against applications that are written and delivered for specific platforms such as the current iOS, Android, Blackberry and FirefoxOS (I saw the latter in evidence at FOSDEM, the Free and Open Source Software Developers' European Meeting, in Brussels last weekend). But it does occur to me that this is, in a way, hedging your bets and doubling (at least) your development efforts. Of course, you may have guessed by now that what I'm thinking of is HTML5. The Web. Browsers on our smartphones, whether native or embedded within a hybrid container such as Cordova (née PhoneGap), are extremely capable and in many ways the same as what we have on our other, larger devices - tablets and desktops.
And indeed there's the thing that brings us back to the title of this post, and the word 'responsive'. What do all the platforms (smartphone, tablet, desktop) have in common? You can build an app, once, and have it run on all these platforms, where it will reform itself: User interface (UI) elements being rearranged, wide columnar displays collapsing into more appropriate structures, and touch-related navigation mechanisms appearing or disappearing. How do you do that? You build for the Web. Yes, capital 'W'. It's that important, and always has been. Build for the Web, use modern techniques so that your application looks, feels and works 'just right' regardless of the form factor of the device you users happen to be accessing it upon.
Guess what? That's exactly what SAP is doing with SAP Fiori. In large-scale efforts to renew the User Experience (UX) of the backend business suite functionality, SAP has adopted this very approach. Run a SAP Fiori app on a smartphone, on a tablet, on your desktop, and you will see what I mean. Moreover, build your own Fiori apps, and as long as you follow certain design and technical guidelines - which the SAP Fiori app developers inside SAP have been following - your apps will respond the same too.
Look under the hood of the SAP Fiori apps and you'll see the UI engine that is powering it all: SAPUI5. SAPUI5 is a large toolkit that contains, amongst other things, a number of control libraries, one of which is 'sap.m'. The 'm' originally stood for mobile, but it stands for a whole lot more in reality. This 'sap.m' library contains the UI controls, the building blocks, from which the SAP Fiori apps are built. And these controls are all designed and written from the ground up to be responsive. So that they 'do the right thing' on whatever platform you use them.
So consider taking a leaf out of SAP's book when thinking about your mobile strategy. Don't think 'mobile', think 'responsive'.
How to be a mentor: A thoughtfully written post with a lot of good suggestions for guiding a mentee along the right path. Let mentees set the agenda for meetings; allow them the occasional mistake (great learning); help them to help themselves by providing strategies for discovering the solution, rather than direct answers; use your experience to help them sort the wheat from the chaff as far as online content is concerned.
Getting access to SAP Fiori trial: many obstacles: Unfortunately the obstacle phenomenon is not a new thing; SAP seem to constantly struggle to make easy the things that should be easy. And in this case it's commercially disadvantageous for them, hindering customers from trialling Fiori. This is one example of many instances where SAP really need to get a grip and learn from other presences on the Web (another is the SAP ID Service, but that's a story for another time).
Why you don't need an Enterprise Service Bus (ESB): This article made me smile, as it's a simple piece but has a very strong impact. There are too many architecture astronauts out there (I for one have had my share of overengineered, overcomplex and underthought designs pushed in my face by them in my career) and I can imagine this piece being a lovely little wake-up call to all those who have seen the classic "ESB icon [seemingly] pre-painted on their whiteboards".
The many languages native to Britain: A fascinating piece, not only because of the myriad languages that are still alive within our shores (and beyond, it seems) but also because of the difficulty (futility?) in classification. What is a language, what is a dialect? What is native and what is immigrant? When do these classifications change? Who says? (Joseph: this is the piece I was telling you about).
King has trademarked the word CANDY (and you're probably infringing): I read this piece probably with my mouth wide open. It beggars belief that the US Trademark Office bureaucrats are stupid enough to cause this to happen. It's one thing for a greedy and self-centred games company to apply for a trademark like them (good luck to them, bold as brass and all) but it's another for the ridiculous request to be granted. Good grief.
Stack Overflow's About Page: I've recently started to become active on Stack Overflow in the UI5 area, in the light of OpenUI5 and our reachout to the wider non-SAP developer ecosphere(s). The reason Stack Overflow is such a success is because of the quality of its content, and the reason for the content quality is the conduct expected. This conduct is explained concisely in the About page, and there's more information in the Help sections too. After struggling with SCN's software for years, and trying to decipher hazy and incomplete questions so that I might answer them, it looks like Stack Overflow will be a breath of fresh air.
So there you have it. I really enjoyed each of these articles; perhaps you'll find something there too.
]]>The other day, Andreas Kunz pointed to an overview of the MVC options which contains very detailed information - an interesting and recommended read. One of the things that piqued my interest was the ability, in XML views, to specify a resource bundle (for internationalisation) declaratively, using a couple of attributes of the root View element. This I thought was rather neat.
So further to my recent explorations and posts on XML views …
… I thought I'd put together a little runnable app and make it available on sapui5bin, to demonstrate it. The result is XMLResourceBundleDeclaration, which is an index file, instantiating an XML view that has the resourceBundle declaration in it; this points to the resourceBundle.properties file in the i18n folder where you might expect to find it in other apps too.
The runnable is here: https://github.com/qmacro/sapui5bin/tree/master/XMLResourceBundleDeclaration
Share and enjoy!
]]>Recently John Patterson supplied a JSBin example of an OData sourced table with a filter on dates, in answer to Using Table filter when a formatter function is used.
This was a very nice example but I thought it would be an interesting exercise to convert it to XML, for a number of reasons:
So I did, and have made it available in the sapui5bin repo on Github here:
sapui5bin/SinglePageExamples/ODataDateTableFilter.html at master · qmacro/sapui5bin · GitHub
Open up this link in a separate window to view it and read the rest of the post.
I'll cover the "single page MVC" concept in another post; for now, here are a few notes to help you navigate:
sap.ui.xmlview({ viewContent: jQuery('#view1').html() })
Anyway, I thought this might provide some useful insight into XML views and single page MVC examples.
Share & enjoy!
Note 1: This "single page MVC" idea is something I've wanted to put together and share for a while; it's easy to write a single page demo UI5 app but not so easy to do that and involve the MVC concept as well - in a single file … until now.
Note 2: The SAP Fiori Wave 1 apps have views that are written declaratively in HTML; the SAP Fiori Wave 2 and 3 apps have views written in XML, using the sap.m controls, with a smattering of sap.ui.layout controls too.
]]>So I decided to put my money where my mouth is and write this document.
There are many examples in the SCN SAPUI5 Developer Center where people are posting questions asking for help with code, and where they don't supply enough information, background, context, or - crucially - code. If you have a problem with your code and want help from other people, help us to help you by sharing the code you're having problems with.
There are good ways and bad ways to share code. Here are a few tips:
Unless you're asking questions about, for example, specific syntax or code patterns, don't just post code snippets and make us guess the rest. Post all of your code. Even the parts that you might not think are relevant. If you're experiencing problems, and don't post all of the code, you're second-guessing the cause, and not helping yourself or us. Remember, we haven't been working on your codebase and so don't have the mental context that you have.
If you can't share all of your code for some reason (intellectual property, security, whatever) then reduce the problem to its core essence and post that - but again, post a complete example. Often, going through the exercise of reproducing the problem in the smallest instance possible leads you to realise what the problem is, and you may not need to ask for help. But if you do, you have at least something to show the people who can help.
Posting large (or even small) chunks of code inside the body of a forum question here on SCN is not that helpful. The syntax highlighting, formatting and font choice that this environment offers as default are not conducive to reading code. Further, posting your code like that makes it that much more difficult for your helpers to marshal it into something that they could run locally to see if they could diagnose the problem themselves. The one exception is where you're providing an initial bit of context. And if you do that, make sure you use the syntax highlighting provided by the SCN Jive editor. It's not brilliant, but it's better than nothing.
ZIP files of complete applications are better, but they're still very cumbersome - you have to download them in your browser, unzip the files, choose a directory, and so on. And nobody can read the code at their leisure, or get a quick understanding of what's going on.
Best of all, at least IMHO, is to create a Gist on Github. This puts the code centre stage, treats it as a first class citizen on the Web (you can address whole applications, individual files, or even individual lines, with their own URLs) and what's more, it's one command to pull the entire codebase of an application to a local directory and start working on it immediately. If nothing else, sharing the code you want help with as a Gist on Github puts the onus on you, who are seeking help (rather than your potential helpers, who are offering help) to marshal the code so that it can be properly diagnosed.
Here's a recent example of where someone had a problem with his application and asked for help, posting not only a formatted snippet to provide initial context:
but also a Gist with a complete working runnable example that highlighted the problem he was having:
Uwe did exactly the right thing. The Gist he created and shared - "Binding Problem with UI5 and XML views" - was complete, didn't omit anything, and was runnable. It took me less than a minute to grab the code and get it running and confirm what the issue was. This particular problem wasn't a big issue, but there are more complex problems that are presented in this area on SCN that are very difficult to diagnose because not only is the code not shared, nor a complete description of the issue given, but also the problem is complex in that it involves relationships between different components and files … which are often missing.
With a Gist, not only can the whole application be downloaded quickly and easily, but also you can review the code in properly formatted and syntax-highlighted fashion, and even point to certain lines (like the last line of the bootstrap, which was missing the data-sap-ui-xx-bindingSyntax setting).
Here's a short screencast of how that shared code, in a Gist, is very easy to pull down locally, fire up and start to diagnose.
https://www.youtube.com/watch?v=Fgp_e3Uv5Xs
Hat tip to my son Joseph Adams, who first showed me that Gists could contain more than one file, and who pointed out that they were normal git repos.
]]>I have a great relationship with colleges in Manchester and next month, with my STEMnet Ambassador hat on, I'll be spending a morning with a group of 24 ICT/Computing teachers from high schools and colleges in the area, to teach them about the Raspberry Pi.
But, what exactly should we be teaching the teachers, to help them educate our kids for our computational future?
A Raspberry Pi is a small, cheap, fully functional computer, slightly bigger than a credit card. It has, in some ways, revolutionised, or perhaps re-invigorated the grass roots computer club style enthusiasm that we experienced decades ago, when the first 8-bit microcomputers such as the Acorn Atom, Sinclair ZX81 and Commodore VIC 20 appeared on the scene. It is to many the perfect platform for a new generation of software and hardware hackers alike (there are many ways you can easily interface the Pi with external devices such as sensors and switches) for a number of reasons:
There are various distributions of GNU/Linux available for the Pi, and they're super-easy to install onto the SD card which functions as the Pi's hard disk equivalent. With these distributions come many software packages over and above the operating system itself. These packages include programming environments such as Scratch, technical computing software systems such as the Wolfram language & Mathematica, and various languages.
There are almost too many languages to mention, and those are just the ones that come out of the box! In a session with a class of students at Xaverian 6th Form College that I ran late last year a group were interested in using Pascal on the Pi, as they were studying that language. Pascal wasn't immediately available, but with a single line we retrieved and installed a free Pascal compiler and they were up and running less than a minute later. But for teaching, I reach for Python more often than not. It's a wonderful language: calm, precise, flexible, and one with which you can learn and write code in different styles (such as procedural, object-oriented, and functional).
So the session next month is to teach the teachers about the Pi: what it can do, and how it can be used in lessons and for coursework.
The key thing to remember is that the Pi is a means to an end, not necessarily an end in itself. On a few occasions I've picked up the sentiment "OK, we now have some Raspberry Pis: education job done!" Unfortunately that's not entirely the case. At one level, giving the kids confidence to pick up a circuit board, connect it up and boot an operating system is a great thing to do. But what to do once that achievement has been unlocked? My aim is to help teach children computational thinking, to be able to survive and flourish in our data-driven future, and that means learning analytical, data and programming skills. That doesn't necessarily mean dry science - in fact, far from it. Computational thinking involves rigour, but it also involves creative thinking and problem solving. Computing is to be found all along a wide spectrum, with science at one end and art at the other.
And this is exactly where the teachers need our help. They need to understand what the Pi is capable of, what Linux makes available to them, what they can use it for, how to involve it in lessons. They're hungry to learn, with a view to passing that knowledge on to our kids. Sometimes, starting with a blank piece of paper is the hardest thing; some direction is needed. That's where we come in. Here are my thoughts and plans of what I'm going to share with them on that day next month. The challenge is always the same: so much to show, so little time. So the focus has to be the best it can be:
Scratch and Python are examples of languages where computing can be made an integral part of the problem solving process. In the past, I've taken primary school kids through the process of prime number determination with Scratch (they'd just learned about the concept of primes in class), and given secondary school kids a taste of games programming and 2D mechanics. I've used Python with kids as a language to solve maths puzzles such as those presented on Project Euler.
But as perhaps a person in business, what do YOU think? What skills do you think we should be teaching our kids so that they have the best chance of survival in the future? If computational thinking is "the fourth R" (after reading, writing and arithmetic), what do you think they would benefit from in relation to the world of business? For me, teaching kids how to use MS-Word and MS-Excel as the pinnacle of the computing curriculum is just not good enough. I'd love to know your thoughts.
Keeping them in sync is a struggle that I avoid, but there's recently been a specific case where I do want to make the effort, and that's time / location scheduling: where I'll be, on what days. This information needs to be in all calendars, to share with work colleagues, and for my own sanity (I see my Google calendar as my master instance in this case).
So I wrote a quick Google Apps Script hack to allow me to quickly specify the where/when events in a particular Google calendar that I maintain (called "Work"), and then have those events distributed, via invitations, to my other Outlook identities. It's not very sophisticated; all it does is look through the Work calendar for all-day events matching a certain title pattern, then look at the guest list; if the emails I want to share the event with are not already on the list, they're added, and invites are sent.
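The heart of that check (which addresses still need inviting) can be sketched in plain JavaScript; the actual Apps Script calendar calls are omitted here, and the email addresses are made-up examples:

```javascript
// Pure-logic sketch of the guest check at the centre of the script: given the
// emails already on an event's guest list and the list we want invited,
// return the ones still missing. Addresses are illustrative only.
function missingGuests(existingGuests, wantedGuests) {
  return wantedGuests.filter(function (email) {
    return existingGuests.indexOf(email) === -1;
  });
}

var existing = ["me@example.com"];
var wanted = ["me@example.com", "work1@example.org", "work2@example.org"];
console.log(missingGuests(existing, wanted));
// -> [ 'work1@example.org', 'work2@example.org' ]
```

Each address returned is one that needs an invitation sending.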
Because of an issue with the CalendarService component of Google Apps Script (that's been open since May 2011!) you can't directly cause invites to be sent to new guests that you add to an existing event. So one workaround is to create the calendar invitation (ICS file) yourself and email it. It's pretty simple (and Romain Vialard has an example in the commentary on the issue).
Actually creating the ICS file myself was a solution to not one but two issues. The ICS files that are auto-generated by Google Calendar when you add a guest via the UI contain entries like this for all-day events:
DTSTART;VALUE=DATE:20140305
DTEND;VALUE=DATE:20140306
Unfortunately Outlook makes an incorrect guess as to the timezone for these dates, based on its own timezone, and when the recipient calendar is in a different timezone than the originating event (in my case UTC and UTC+1), it causes the replicated entry to be skewed by an hour, causing the all-day event to span the wrong days! So being in control of the ICS content generation means I can be more explicit, which in turn means that Outlook doesn't get it wrong:
DTSTART:20140305T000000Z
DTEND:20140306T000000Z
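Generating that explicit form is straightforward. Here's a minimal plain-JavaScript sketch; the summary and dates are example values, and a real ICS file would of course also need the surrounding VCALENDAR wrapper:

```javascript
// Sketch of building an all-day VEVENT fragment with explicit UTC timestamps,
// so the receiving client can't mis-guess the timezone. The event details
// below are examples only.
function toIcsUtc(date) {
  // Format a Date as YYYYMMDDT000000Z (midnight UTC)
  function pad(n) { return (n < 10 ? "0" : "") + n; }
  return "" + date.getUTCFullYear() + pad(date.getUTCMonth() + 1) +
    pad(date.getUTCDate()) + "T000000Z";
}

function allDayEvent(summary, start, end) {
  return [
    "BEGIN:VEVENT",
    "SUMMARY:" + summary,
    "DTSTART:" + toIcsUtc(start),
    "DTEND:" + toIcsUtc(end),
    "END:VEVENT"
  ].join("\r\n");
}

var ics = allDayEvent("Walldorf",
  new Date(Date.UTC(2014, 2, 5)),   // 5 Mar 2014
  new Date(Date.UTC(2014, 2, 6)));  // 6 Mar 2014 (DTEND is exclusive)
console.log(ics);
```

Note the use of the UTC accessor functions throughout, which keeps the output independent of wherever the script happens to run.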
So that's it. I've made the script available as a Gist on Github and here's a screenshot of some typical results.
Share & enjoy!
]]>We know, from the other posts in this series, that there are a number of views. Let's just take them one by one. If you want an introduction to XML views, please refer to the previous post Mobile Dev Course W3U3 Rewrite - XML Views - An Intro. I won't cover the basics here.
The App view contains an App control (sap.m.App) which contains, in the pages aggregation, the rest of the views - the ones that are visible. This is what the App view looks like in XML.
<?xml version="1.0" encoding="UTF-8" ?>
<core:View controllerName="com.opensap.App" xmlns:core="sap.ui.core"
xmlns="sap.m" xmlns:mvc="sap.ui.core.mvc">
<App id="app">
<mvc:XMLView viewName="com.opensap.Login" id="Login" />
<mvc:XMLView viewName="com.opensap.ProductList" id="ProductList" />
<mvc:XMLView viewName="com.opensap.ProductDetail" id="ProductDetail" />
<mvc:XMLView viewName="com.opensap.SupplierDetail" id="SupplierDetail" />
</App>
</core:View>
We're aggregating four views in the App control (introduced by the <App> tag). Because the pages aggregation is the default, we don't have to wrap the child views in a <pages> ā¦ </pages> element. Views and the MVC concept belong in the sap.ui.core library, hence the xmlns:core namespace prefix usage.
The Login view contains, within a Page control, a user and password field, and a login button in the bar at the bottom. This is what the XML view looks like.
<?xml version="1.0" encoding="UTF-8" ?>
<core:View controllerName="com.opensap.Login" xmlns:core="sap.ui.core"
xmlns="sap.m" xmlns:mvc="sap.ui.core.mvc">
<Page
title="Login"
showNavButton="false">
<footer>
<Bar>
<contentMiddle>
<Button
text="Login"
press="loginPress" />
</contentMiddle>
</Bar>
</footer>
<List>
<InputListItem label="Username">
<Input value="{app>/Username}" />
</InputListItem>
<InputListItem label="Password">
<Input value="{app>/Password}" type="Password" />
</InputListItem>
</List>
</Page>
</core:View>
You can see that the Page control is the 'root' control here, and there are a couple of properties set (title and showNavButton) along with the footer aggregation and the main content. Note that as this is not JavaScript, values that you think might appear "bare" are still specified as strings - showNavButton="false" is a good example of this.
The Page's footer aggregation expects a Bar control, and that's what we have here. In turn, the Bar control has three aggregations that have different horizontal positions, currently left, middle and right. We're using the contentMiddle aggregation to contain the Button control. Note that the Button control's press handler "loginPress" is specified simply; by default the controller object is passed as the context for "this". You don't need to try and engineer something that you might have seen in JavaScript, like this:
new sap.m.Button({
text: "Login",
press: [oController.loginPress, oController]
}),
… it's done automatically for you.
Note also that we can use data binding syntax in the XML element attributes just like we'd expect to be able to, for example value="{app>/Username}".
In the ProductList view, the products in the ProductCollection are displayed. There are a couple of things that are worth highlighting in this view. First, let's have a look at the whole thing.
<?xml version="1.0" encoding="UTF-8" ?>
<core:View controllerName="com.opensap.ProductList" xmlns:core="sap.ui.core"
xmlns="sap.m" xmlns:mvc="sap.ui.core.mvc">
<Page
title="Products">
<List
headerText="Product Overview"
items="{
path: '/ProductCollection'
}">
<StandardListItem
title="{Name}"
description="{Description}"
type="Navigation"
press="handleProductListItemPress" />
</List>
</Page>
</core:View>
The List control is aggregating the items in the ProductCollection in the data model. Note how the aggregation is specified in the items attribute - it's pretty much the same syntax as you'd have in JavaScript, here with the 'path' parameter. The only difference is that it's specified as an object inside a string, rather than an object directly:
items="{
path: '/ProductCollection'
}"
So remember to get your quoting (single, double) right.
And then we have the template, the "stamp" which we use to produce a nice visible instantiation of each of the entries in the ProductCollection. This is specified in the default aggregation 'items', which, as it's the default, I've omitted here.
By now I'm sure you're starting to see the pattern, and also the benefit of writing views in XML. It just makes a lot of sense, at least to me. It's cleaner, it makes you focus purely on the controls, and also by inference causes you to properly separate your view and controller concerns. You don't even have the option, let alone the temptation, to write event handling code in here.
So here's the ProductDetail view.
<?xml version="1.0" encoding="UTF-8" ?>
<core:View controllerName="com.opensap.ProductDetail" xmlns:core="sap.ui.core"
xmlns="sap.m" xmlns:mvc="sap.ui.core.mvc">
<Page
title="{Name}"
showNavButton="true"
navButtonPress="handleNavButtonPress">
<List>
<DisplayListItem label="Name" value="{Name}" />
<DisplayListItem label="Description" value="{Description}" />
<DisplayListItem label="Price" value="{Price} {CurrencyCode}" />
<DisplayListItem
label="Supplier"
value="{SupplierName}"
type="Navigation"
press="handleSupplierPress" />
</List>
<VBox alignItems="Center">
<Image
src="{app>/ES1Root}{ProductPicUrl}"
decorative="true"
densityAware="false" />
</VBox>
</Page>
</core:View>
We're not aggregating any array of data from the model here; we're just presenting four DisplayListItem controls one after the other in the List. Below that we have a centrally aligned image that shows the product picture.
And finally we have the SupplierDetail view.
<?xml version="1.0" encoding="UTF-8" ?>
<core:View controllerName="com.opensap.SupplierDetail" xmlns:core="sap.ui.core"
xmlns="sap.m" xmlns:mvc="sap.ui.core.mvc">
<Page
id="Supplier"
title="{CompanyName}"
showNavButton="true"
navButtonPress="handleNavButtonPress">
<List>
<DisplayListItem label="Company Name" value="{CompanyName}" />
<DisplayListItem label="Web Address" value="{WebAddress}" />
<DisplayListItem label="Phone Number" value="{PhoneNumber}" />
</List>
</Page>
</core:View>
Again, nothing really special, or specially complicated, here. Just like the other views (apart from the "root" App view), this has a Page as its outermost control. Here again we have just simple, clean declarations of what should appear, control-wise.
So there you have it. For me, starting to write views in XML was a revelation. The structure and the definitions seem to flow more easily - so much so, in fact, that in a last-minute addition to the DemoJam lineup at the annual SAP UK & Ireland User Group Conference in Birmingham last week, I took part, and for my DemoJam session I stood up and built an SAP Fiori-like UI live on stage. Using XML views.
This brings to an end the series that started out as an itch I wanted to scratch: To improve the quality of the SAPUI5 application code that was presented in the OpenSAP course "Introduction To Mobile Solution Development". There are now 6 posts in the series, including this one:
I hope you found it useful and interesting, and as always,
]]>One of the features of the app that the participants build in the CD168 sessions at SAP TechEd Amsterdam is a list of sales orders that can be grouped according to status or price (the screenshot shows the orders grouped by price).
This is achieved by specifying a value for the vGroup parameter on the Sorter, as documented in the sap.ui.model.Sorter API reference:
Configure grouping of the content, can either be true to enable grouping based on the raw model property value, or a function which calculates the group value out of the context (e.g. oContext.getProperty("date").getYear() for year grouping). The control needs to implement the grouping behaviour for the aggregation which you want to group.
So what this means is that you either specify a boolean true value, or supply a function that calculates the group value.
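As a minimal sketch of the function form (plain JavaScript, no SAPUI5 runtime; the fake context object below is an assumption, standing in for the binding context a real Sorter would pass to the group function):

```javascript
// Sketch of a vGroup function in the style the documentation describes.
// Note: the documentation example uses getYear(), which in JavaScript
// returns the year minus 1900; getFullYear() is the safer choice.
function yearGrouper(oContext) {
  return oContext.getProperty("date").getFullYear();
}

// Stand-in for a binding context, for illustration only
var fakeContext = {
  _data: { date: new Date("2013-11-28") },
  getProperty: function (name) { return this._data[name]; }
};

console.log(yearGrouper(fakeContext)); // -> 2013
```

All the Sorter then needs is that function reference as its grouping argument, and each bound item is grouped under the year value the function returns.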
Here in the example in the screenshot on the left, we're using a custom grouper function to arrange the sales orders into value groups (less than EUR 5000, less than EUR 10,000 and more than EUR 10,000).
But what if you wanted to influence not only the sort but also the order of the groups themselves? Specifically in this screenshot example, what if we wanted to have the "< 5000 EUR" group appear first, then the "> 10,000 EUR" group and finally the "< 10,000 EUR" group? (This is a somewhat contrived example but you get the idea). This very question is one I was asking myself while preparing for the CD168 session, and also one I was asked by an attendee.
To understand how to do it, you have to understand that the relationship between the sorter and the grouper can be seen as a "master / slave" relationship. This is in fact reflected in how you specify the grouper - as a subordinate of the master.
The sorter drives everything, and the grouper just gets a chance to come along for the ride.
So to answer the question, and to illustrate it in code step by step, I've put together an example. It takes a simple list of numbers 1 to 30 and displays them in a list, and groups them into three size categories. You can specify in which order the groups appear, but the key mechanism to achieve this, as you'll see, is actually in the sorter.
To understand further, you have to remember that there's a simple sorter specification and a more complex one. Using a simple sorter is often the case, and you'd specify it like this:
new sap.m.List("list", {
items: {
path: '/records',
template: new sap.m.StandardListItem({
title: '{amount}'
}),
sorter: new sap.ui.model.Sorter("amount") // <---
}
})
This is nice and simple and sorts based on the value of the amount property, default ascending.
The complex sorter is where you can specify your own custom sorting logic, and you do that by creating an instance of a Sorter and then specifying your custom logic for the fnCompare function.
We'll be using the sorter with its own custom sorting logic.
So here's the example, described step by step. It's also available as a Gist on Github: Custom Sorter and Grouper in SAPUI5 and exposed in a runtime context using the bl.ocks.org facility: http://bl.ocks.org/qmacro/7702371.
As the source code is available in the Gist, I won't bother showing you the HTML and SAPUI5 bootstrap, I'll just explain the main code base.
var sSM = 10; // < 10 Small
var sML = 15; // < 15 Medium
// 15+ Large
Here we just specify the boundary values for chunking our items up into groups. Anything less than 10 is "Small", less than 15 is "Medium", otherwise it's "Large". I've deliberately chosen groupings that are not of equal size (the range is 1-30) just for a better visual example effect.
// Generate the list of numbers and assign to a model
var aValues = [];
for (var i = 1; i <= 30; i++) aValues.push(i);
sap.ui.getCore().setModel(
new sap.ui.model.json.JSONModel({
records: aValues.map(function(v) { return { value: v }; })
})
);
So we generate a list of numbers (I was really missing Python's xrange here, apropos of nothing!) and add it as a model to the core.
// Sort order and title texts of the S/M/L groups
var mGroupInfo = {
S: { order: 2, text: "Small"},
M: { order: 1, text: "Medium"},
L: { order: 3, text: "Large"}
}
Here I've created a map object that specifies the order in which the Small, Medium and Large groups should appear in the list (Medium first, then Small, then Large). The texts are what should be displayed in the group subheader/dividers in the list display.
// Returns to what group (S/M/L) a value belongs
var fGroup = function(v) {
return v < sSM ? "S" : v < sML ? "M" : "L";
}
This is just a helper function to return which size category (S, M or L) a given value belongs to.
// Grouper function to be supplied as 3rd parm to Sorter
// Note that it uses the mGroupInfo, as does the Sorter
var fGrouper = function(oContext) {
var v = oContext.getProperty("value");
var group = fGroup(v);
return { key: group, text: mGroupInfo[group].text };
}
Here's our custom Grouper function that will be supplied as the third parameter to the Sorter. It pulls the value of the property from the context object it receives, uses the fGroup function (above) to determine the size category, and then returns what a group function should return - an object with key and text properties that are then used in the display of the bound items.
// The Sorter, with a custom compare function, and the Grouper
var oSorter = new sap.ui.model.Sorter("value", null, fGrouper);
oSorter.fnCompare = function(a, b) {
// Determine the group and group order
var agroup = mGroupInfo[fGroup(a)].order;
var bgroup = mGroupInfo[fGroup(b)].order;
// Return sort result, by group ...
if (agroup < bgroup) return -1;
if (agroup > bgroup) return 1;
// ... and then within group (when relevant)
if (a < b) return -1;
if (a == b) return 0;
if (a > b) return 1;
}
Here's our custom Sorter. We create one as normal, specifying the fact that we want the "value" property to be the basis of our sorting. The 'null' is specified in the ascending/descending position (default is ascending), and then we specify our Grouper function. Remember, the grouper just hitches a ride on the sorter.
Because we want to influence the sort order of the groups as well as the order of the items within each group, we have to determine to what group each of the two values to be compared belong. If the groups are different, we just return the sort result (-1 or 1) at the group level. But if the two values are in the same group then we have to make sure that the sort result is returned for the items themselves.
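Stripped of the SAPUI5 machinery, the same two-level comparison can be demonstrated with a plain Array.prototype.sort, reusing the boundary values and group order from the example above:

```javascript
// Plain JavaScript sketch of the two-level compare: first by group order,
// then by value within the group. Boundaries and order mirror the example
// (< 10 Small, < 15 Medium, otherwise Large; Medium first, then Small, then Large).
var mOrder = { S: 2, M: 1, L: 3 };
function fGroup(v) { return v < 10 ? "S" : v < 15 ? "M" : "L"; }

function fnCompare(a, b) {
  var agroup = mOrder[fGroup(a)];
  var bgroup = mOrder[fGroup(b)];
  if (agroup !== bgroup) return agroup - bgroup; // sort by group first ...
  return a - b;                                  // ... then within the group
}

var sorted = [3, 20, 12, 1, 14, 29].sort(fnCompare);
console.log(sorted); // -> [ 12, 14, 1, 3, 20, 29 ] (Medium, Small, Large)
```

The comparator never lets an item "escape" its group: group order always wins, and the per-item comparison only decides ties within a group.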
// Simple List in a Page
new sap.m.App({
pages: [
new sap.m.Page({
title: "Sorted Groupings",
content: [
new sap.m.List("list", {
items: {
path: '/records',
template: new sap.m.StandardListItem({
title: '{value}'
}),
sorter: oSorter
}
})
]
})
]
}).placeAt("content");
And that's pretty much it. Once we've done the hard work of writing our custom sorting logic, and shared the group determination between the Sorter and the Grouper (DRY!) we can just specify the custom Sorter in our binding of the items.
And presto! We have what we want - a sorted list of items, grouped, and those groups also in an order that we specify.
There was a comment on this post which was very interesting and described a situation where you want to sort, and group, based on different properties. This is also possible. To achieve sorting on one property and grouping based on another, you have to recall that you can pass either a single Sorter object or an array of them, in the binding.
So let's say you have an array of records in your data model, and these records have a "beerName" and a "beerType" property. You want to group by beerType, and within beerType you want the actual beerNames sorted.
In this case, you could have two Sorters: One for the beerType, with a Grouper function, and another for the beerName. Like this:
var fGrouper = function(oContext) {
var sType = oContext.getProperty("beerType") || "Undefined";
return { key: sType, text: sType };
}
new sap.m.App({
pages: [
new sap.m.Page({
title: "Craft Beer",
content: [
new sap.m.List("list", {
items: {
path: '/',
template: new sap.m.StandardListItem({
title: "{beerName}",
description: "{beerType}"
}),
sorter: [
new sap.ui.model.Sorter("beerType", null, fGrouper),
new sap.ui.model.Sorter("beerName", null, null)
]
}
})
]
})
]
}).placeAt("content");
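In plain JavaScript terms, the two chained Sorters amount to a two-key comparator: compare beerType first, and fall back to beerName only when the types match. A sketch, with made-up beer data:

```javascript
// Plain JavaScript sketch of what the two chained Sorters achieve:
// primary key beerType (which also drives the grouping), secondary key
// beerName. The beer data here is invented for illustration.
var aBeers = [
  { beerName: "Jaipur", beerType: "IPA" },
  { beerName: "Entire Stout", beerType: "Stout" },
  { beerName: "Axe Edge", beerType: "IPA" }
];

aBeers.sort(function (a, b) {
  return a.beerType.localeCompare(b.beerType)
      || a.beerName.localeCompare(b.beerName);
});

console.log(aBeers.map(function (b) { return b.beerName; }));
// -> [ 'Axe Edge', 'Jaipur', 'Entire Stout' ]
```

The `||` works because localeCompare returns 0 for equal values, so the second comparison only kicks in within a beerType.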
I've put a complete example together for this, and it's in the sapui5bin Github repo here:
https://github.com/qmacro/sapui5bin/blob/master/SortingAndGrouping/TwoProperties.html
And while we're on the subject of code examples, there's a complete example for the main theme of this post here:
https://github.com/qmacro/sapui5bin/blob/master/SortingAndGrouping/SingleProperty.html
Share & enjoy!
]]>But this is exactly where the Demo Jam competition this year took me: the intersection between customer and technology, in other words, the user experience (UX).
The Demo Jam is an event within this conference and other conferences (such as SAP TechEd) where there are a series of teams giving short sharp bursts of presentation. There are simple rules: Live, no slides, and over in 5 mins. And a winner is voted by the audience with the help of a "clapometer". It's a bit of fun, but also has a serious side to it: The aim is to wow the audience with something relevant.
This year there were four teams due to participate but unfortunately one had to drop out at the last minute. To cut a long story short, I got a call on the Friday before the Sunday conference start asking if I could step in, put something together and take the place of the team that had dropped out.
Already with fairly full plans for the weekend, I found some space on Sunday (I was only going to be attending the conference on Monday) and thought about what I could do. With my current work at the SAP Mothership (i.e. Walldorf) with the SAPUI5 / Fiori teams, the answer came quite quickly: Show the audience what makes SAP Fiori apps tick, what's under the hood … by building an SAP Fiori-like UI live on stage in 5 mins.
An updated version of an old TV / stage adage goes something like this:
"Never work with children or animals, or do live coding, unless you're a fool."
Being a fool, and with no children or animals around, I went for the third option and wrote XML in front of hundreds of people, instantiating SAPUI5 controls and building an SAP Fiori-like app before their eyes (classic design: master/detail showing sales orders and details). It was made slightly more "interesting" than it might otherwise have been by the fact that my hands were really cold and my fingers inflexible (I'd recently arrived and it had been very cold outside), which is not ideal for typing under pressure.

The key thing I wanted to get across was that there is no mystery around SAP Fiori; apps are created from building blocks like everything else, in this case building blocks in the SAPUI5 framework. It's important to help folks understand what Fiori is, what it isn't, and what it might be. A major part (but by no means the entire part) of what it is ... is a set of applications built in an outside-in fashion using a modern UI framework (SAPUI5) that has a super design pedigree and which, for its young age, is extremely accomplished already.
The majority of the audience had heard of Fiori, which was great, and hopefully after my Demo Jam entry they understand a little bit more of what makes Fiori apps tick, and are better armed to ask the right questions and make the right decisions.
I was totally honoured to be part of Demo Jam this year. The other entries were great (everything from immersive virtual reality with big data, through automated training solutions, to compliance systems) but, perhaps largely due to [eddies in the space time continuum](http://en.wikiquote.org/wiki/The_Hitchhiker's_Guide_to_the_Galaxy#Chapter_2_3), I won!

In a way, the fact that I only had a few hours to come up with something and prepare my entry made it quite a fun experience ... and I'm already looking forward to seeing the entries next year!
In this part, we discuss SAPUI5 and SAP Fiori in general, and talk about the relation between these two things, major features in SAPUI5, including the automatic module loading system, the data model mechanisms, and in particular OData. We also talk about the architecture and startup of a very simple app.
Questions covered:
In this part we dig a little deeper, and talk about what a more complex app looks like. There's an example custom Fiori app, built using the Component concept and there's an 11 minute screencast that walks through that app, the controls used, and then looks under the hood to see how it's put together (bootstrap, parameters, Component and ComponentContainer, index.html, Component.js, views (JavaScript & XML) and controllers, View containing the SplitApp control, custom utility functions, internationalisation, folder structure, and more).
Further questions covered:
If you just want a future reference to the screencast, it's available separately here too: https://www.youtube.com/watch?v=tfOO4szA2Bg.
Share and enjoy!
To understand where XML views fit in, let's take a look at this diagram that highlights the Model-View-Controller (MVC) support that SAPUI5 has.
Views can be written in JavaScript, HTML, JSON or XML.
Recently I've had a good chunk of work that involved writing XML views, and I can honestly say that in contrast to the received wisdom that has XML as "generally verbose and clunky", writing views in XML is both concise and very pleasant, not to mention satisfyingly declarative.
Let's get an idea of what an XML view looks like, and contrast it with the JavaScript equivalent. We'll keep it deliberately simple - this is what it looks like:
We're using the sap.m library controls: An App, containing a Page, which has a Text as the only main content, and a Bar, containing a Button, as the footer.
This is what the view looks like declared in XML.
<core:View xmlns:core="sap.ui.core" xmlns="sap.m">
<App>
<Page title="Greetings">
<Text text="Hello World" />
<footer>
<Bar>
<contentRight>
<Button text="Edit" />
</contentRight>
</Bar>
</footer>
</Page>
</App>
</core:View>
And this is what it looks like declared in JavaScript. Note that I've deliberately avoided any unnecessary verbosity by not declaring intermediate variables to hold the different controls, as is common in many examples.
sap.ui.jsview("com.opensap.Page", {
createContent: function(oController) {
return new sap.m.App({
pages: [
new sap.m.Page({
title: "Greetings",
content: [
new sap.m.Text({
text: "Hello World"
})
],
footer: new sap.m.Bar({
contentRight: new sap.m.Button({
text: "Edit"
})
})
})
]
});
}
})
Now, this is not a competition between the two, but I know which I prefer. The XML view is simpler to scan and it's clearer to see what controls are being used, and what relation they have to each other. It's also slightly less verbose than the JavaScript view. This conciseness is accentuated with larger views - the concise nature of the declarative syntax remains and scales in XML.
Let's have a look in a bit more detail at that XML view. Notice first that there are namespace definitions. This is standard XML stuff, and affords us prefixes to specify which SAPUI5 library a given control comes from. Here we have a couple of namespaces - "core", for the sap.ui.core library, and the default namespace (no prefix) for the sap.m library. It makes sense in this case (and for the Fiori apps) to have the default namespace set to sap.m, as that's where the majority of the controls come from, and therefore the majority of the XML element names won't need a prefix. Note the "core" prefix is used on the root element itself: "core:View".
Once we've declared our namespaces, we're ready to declare the controls that we want to use. And for the App control, for example, it's as simple as an opening and closing element pair: "<App> ... </App>".
Note that the controls we mentioned are represented by XML elements that are capitalised (View, App, Page, Text, Bar, Button).
But what about the other stuff? What's that "<footer>", for example? That's not a control, nor is it capitalised. It's an aggregation. Specifically, an aggregation belonging to the Page control.
Let's take a step back and look at how controls are structured. We'll take the sap.m.Page control as an example. This is what we see when we look at the sap.m.Page control's constructor documentation in the API reference:
We see that a control can have Properties, Aggregations, Associations and Events. In this example, what we're interested in are Properties and Aggregations. By now you will probably have worked out that Properties of a control are declared using XML attributes (title="Greetings", for example). And as properties don't start with an uppercase letter, neither do the corresponding attribute names.
So that brings us on to Aggregations. An aggregation can be thought of as a collection of zero or more 'child controls'. Perhaps one of the most common aggregations is in a sap.m.List control, where the entries in the list are, say, sap.m.StandardListItem control children in the 'items' aggregation. Note that in the aggregation definition, the type of controls that can be contained can be restricted. In the case of the List's items aggregation, the type sap.m.ListItemBase is specified:
As sap.m.StandardListItem inherits from sap.m.ListItemBase, it is a valid control to be contained in the sap.m.List's items aggregation.
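The validity check described here boils down to inheritance: a control is acceptable in an aggregation if it is an instance of the type the aggregation declares. Here's an illustrative sketch of the idea using plain JavaScript constructor functions as stand-ins (these are not the real SAPUI5 classes):

```javascript
// Stand-ins illustrating why a StandardListItem is valid inside a
// List's 'items' aggregation: the framework checks that each child is
// an instance of the type the aggregation declares (ListItemBase).
function ListItemBase() {}
function StandardListItem() {}
StandardListItem.prototype = Object.create(ListItemBase.prototype);

// Hypothetical helper mimicking the aggregation's type restriction
function isValidForItemsAggregation(oControl) {
  return oControl instanceof ListItemBase;
}

console.log(isValidForItemsAggregation(new StandardListItem())); // true
console.log(isValidForItemsAggregation({}));                     // false
```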
So, back to "<footer>ā¦</footer>". Guess what? Yes, this XML element, with a lower case initial letter, represents an aggregation. You can see from the documentation screenshot that the footer aggregation of the sap.m.Page control expects a single control - a sap.m.Bar. So that's what we have - a Bar. And in turn, the sap.m.Bar control has a triple of aggregations, representing content in the left, center and right of the Bar. In this case we want to put a sap.m.Button control on the right, so we use the contentRight aggregation. And the simplicity of what we want to do is in some part reflected in the simplicity of the XML:
<Page title="Greetings">
<Text text="Hello World" />
<footer>
<Bar>
<contentRight>
<Button text="Edit" />
</contentRight>
</Bar>
</footer>
</Page>
But wait. In the JavaScript version the sap.m.Text control is specified within the content aggregation of the sap.m.Page:
new sap.m.Page({
title: "Greetings",
content: [
new sap.m.Text({
text: "Hello World"
})
],
Where's the equivalent in the XML? There isn't. Or rather, it's implicit. The content aggregation is the Page's default aggregation, and as such, doesn't need to be explicitly declared in the XML view. Clean! You can include the "<content> ā¦ </content>" if you want:
<Page title="Greetings">
<content>
<Text text="Hello World" />
</content>
<footer>
<Bar>
<contentRight>
<Button text="Edit" />
</contentRight>
</Bar>
</footer>
</Page>
but you don't have to.
Armed with this knowledge, we're ready to examine the XML versions of the views in the W3U3 app I rewrote. We'll do that in the next post.
Until then, share & enjoy!
If you remember back to the Login controller (described in the previous post in this series) we arrive at the ProductList view after successfully logging in, creating the OData model for the business available at the OData service, and performing a move from the Login page to this ProductList page with oApp.to("ProductList"), the navigation mechanism that is available in the App control, inherited from NavContainer.
Here's what the ProductList view looks like.
sap.ui.jsview("com.opensap.ProductList", {
getControllerName: function() {
return "com.opensap.ProductList";
},
createContent: function(oController) {
return new sap.m.Page("ProductPage", {
title: "Products",
content: [
new sap.m.List({
headerText: "Product Overview",
items: {
path: "/ProductCollection",
template: new sap.m.StandardListItem({
title: "{Name}",
description: "{Description}",
type: sap.m.ListType.Navigation,
press: [oController.handleProductListItemPress, oController]
})
}
})
]
});
}
});
Like the previous views, this isn't actually much different from the original version. I've left out stuff that wasn't needed, and in particular the icon property of each StandardListItem was pointing at the wrong model property name, resulting in no icon being shown in the list. I've removed the icon* properties as well as a couple of list properties (inset and type).
What I have done, though, mostly for fun, is to write the createContent function as a single statement. This is in contrast to the multiple statements in the original, but perhaps more interestingly, the whole thing looks more declarative than imperative. This will come into play when we eventually look at declarative views in XML, which are actually my preference, and arguably the neatest and least amount of typing ... which might surprise you. Anyway, more on that another time.
The ProductList controller is very simple; all it has to do is handle the press of the StandardListItem (see the press event specification in the view above).
sap.ui.controller("com.opensap.ProductList", {
handleProductListItemPress: function(oEvent) {
this.getView().getParent().to("ProductDetail", {
context: oEvent.getSource().getBindingContext()
});
}
});
Again, I've left out the empty boilerplate code from the original, and am just doing what's required, nothing more: getting the binding context of the source of the event (the particular StandardListItem that was pressed), and passing that in the navigation to the ProductDetail page.
Note that I've been sort of interchanging the word page and view here and earlier. This is in relation to the App control, which has a 'pages' aggregation from the NavContainer control. As the documentation states, you don't have to put Page controls into this pages aggregation, you can put other controls that have a fullscreen semantic, and one of those possible controls is a View.
So we've navigated from the ProductList to the ProductDetail by selecting an item in the List control, and having that item's binding context (related to the OData model) passed to us. Here's what the view looks like.
sap.ui.jsview("com.opensap.ProductDetail", {
getControllerName: function() {
return "com.opensap.ProductDetail";
},
onBeforeShow: function(oEvent) {
if (oEvent.data.context) {
this.setBindingContext(oEvent.data.context);
}
},
So in the ProductDetail view, where we want to simply show more detail about that particular Product entity, we first make sure that the passed context is bound (to the view).
createContent: function(oController) {
return new sap.m.Page({
title: "{Name}",
showNavButton: true,
navButtonPress: [oController.handleNavButtonPress, oController],
content: [
new sap.m.List({
items: [
new sap.m.DisplayListItem({
label: "Name",
value: "{Name}"
}),
new sap.m.DisplayListItem({
label: "Description",
value: "{Description}"
}),
new sap.m.DisplayListItem({
label: "Price",
value: "{Price} {CurrencyCode}"
}),
new sap.m.StandardListItem({
title: "Supplier",
description: "{SupplierName}",
type: sap.m.ListType.Navigation,
press: [oController.handleSupplierPress, oController]
})
]
}),
new sap.m.VBox({
alignItems: sap.m.FlexAlignItems.Center,
items: [
new sap.m.Image({
src: "{app>/ES1Root}{ProductPicUrl}",
decorative: true,
densityAware: false
})
]
})
]
});
}
});
Once that's done, all we have to do is fill out the createContent function, which again is very similar to the original. Note that here I'm using two model properties together for the value of the "Price" item to show a currency value and code.
In the original version, there was some custom data attached to the Supplier item - specifically the SupplierId property from the Product. This was used, in the controller, to manually (and somewhat "bluntly") construct an OData entity URL for subsequent (manual) retrieval. Of course, you might have guessed by now what I'm going to say: not necessary at all. More on this shortly. But it's worth pointing out that attaching custom data is quite a useful and widely available facility in general. It's widely available because it's part of the Element class, from which, ultimately, all controls inherit. So you can attach custom data in name/value pairs to any control you wish, more or less.
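To make the custom data idea concrete, here's a minimal stand-in illustrating the concept of name/value pairs attached to an element-like object. This is not the real SAPUI5 mechanism (which lives on sap.ui.core.Element); the class and values here are purely illustrative:

```javascript
// Minimal stand-in illustrating the custom data concept: name/value
// pairs attached to an element-like object. Not the real SAPUI5 API.
function FakeElement() {
  this._customData = {};
}

// Setter/getter in one method: two arguments sets, one argument gets
FakeElement.prototype.data = function (sName, vValue) {
  if (arguments.length === 2) {
    this._customData[sName] = vValue;
    return this; // allow chaining
  }
  return this._customData[sName];
};

var oItem = new FakeElement();
oItem.data("SupplierId", "0100000085"); // hypothetical supplier ID
console.log(oItem.data("SupplierId"));  // "0100000085"
```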
Finally, let's have a quick look at that VBox control containing the product image. I took a lead from the original app and decided to prefix the relative URL (which is what is contained in the ProductPicUrl property) with the generic (non-SMP-proxied) 'sapes1' URL base. And to achieve this prefixing I just concatenated a couple of model properties - one from the named "app" model (the ES1Root) and the other being the actual image relative URL.
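The prefixing described here relies on the complex binding syntax, which lets two binding parts be concatenated into one property value. As a rough sketch of the resolution logic (a naive hypothetical helper, not the actual SAPUI5 binding parser; the URL values are made up for illustration):

```javascript
// Naive sketch of how a composite binding like
// "{app>/ES1Root}{ProductPicUrl}" resolves: each {...} part is looked
// up in the named model (before '>') or the default model, and the
// results are concatenated. Hypothetical helper, not the real parser.
function resolveBinding(sTemplate, mModels) {
  return sTemplate.replace(/\{([^}]+)\}/g, function (sMatch, sPath) {
    var aParts = sPath.split(">");
    var sModelName = aParts.length === 2 ? aParts[0] : "default";
    var sPropPath = aParts.length === 2 ? aParts[1] : aParts[0];
    var oModel = mModels[sModelName];
    // strip a leading '/' from absolute paths, then look the value up
    return oModel[sPropPath.replace(/^\//, "")];
  });
}

var sUrl = resolveBinding("{app>/ES1Root}{ProductPicUrl}", {
  app: { ES1Root: "https://sapes1.sapdevcenter.com" },
  "default": { ProductPicUrl: "/images/HT-1007.jpg" } // made-up path
});
console.log(sUrl); // "https://sapes1.sapdevcenter.com/images/HT-1007.jpg"
```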
Ok, let's have a look at the rewritten controller.
sap.ui.controller("com.opensap.ProductDetail", {
handleNavButtonPress: function(oEvent) {
this.getView().getParent().back();
},
handleSupplierPress: function(oEvent) {
this.getView().getParent().to("SupplierDetail", {
context: oEvent.getSource().getBindingContext()
});
}
});
As well as the back navigation, we have the handling of the press of the Supplier item in the ProductDetail view. This should take us to the SupplierDetail view to show us more information about the supplier.
So before we think about how we make this work, let's pause for a second and consider the business data that we're consuming through the OData service.
We have, in the OData service originating at https://sapes1.sapdevcenter.com/sap/opu/odata/sap/ZGWSAMPLE_SRV/, a number of EntitySets, or 'collections', including the BusinessPartnerCollection and the ProductCollection - both of which have entities that we're interested in for our app. We start out with the ProductCollection, display a list, pick a specific product for more detail, and then go to the supplier for that product. If you look at the OData metadata for this service, you'll see that in the definition of the Product entity, there's a navigation property that will take us directly from the product entity to the related business partner entity. How useful is that? Yes, very! So let's use it.
Before we look at how we use it, let's review how the original app was doing things here to go from the selected product detail to the supplier. In the supplierTap function of the original ProductDetail controller, the OData.read function was called explicitly (ouch), on a manually constructed OData URL (ouch), which abruptly jumped straight to the BusinessPartnerCollection, ignoring this navigation feature (double-ouch). The supplier's ID (which had been squirrelled away in the custom data as mentioned earlier) was specified directly, as a key predicate, and a JSON representation was requested:
OData.read("https://sapes1.sapdevcenter.com/sap/opu/odata/sap/ZGWSAMPLE_SRV/BusinessPartnerCollection('" + supplierId + "')?$format=json", ...)
Yes, you can guess the next bit :-) The JSON data was passed directly to the next view, bypassing any semblance of OData model usage. Ouch. I guess this also bypasses the SMP URL rewriting security, and should really have been the SMP-based URL. And ouch.
So how did we do it here? Well, just by passing the context of the selected product, as usual. Just like we did when we went from the ProductList view to the ProductDetail view. And then following on from that in the SupplierDetail view with a reference to the relative 'Supplier' entity.
Ok, so here's the view.
sap.ui.jsview("com.opensap.SupplierDetail", {
getControllerName: function() {
return "com.opensap.SupplierDetail";
},
onBeforeShow: function(oEvent) {
if (oEvent.data.context) {
this.setBindingContext(oEvent.data.context);
}
},
createContent: function(oController) {
var oPage = new sap.m.Page({
title: "{CompanyName}",
showNavButton: true,
navButtonPress: [oController.handleNavButtonPress, oController],
content: [
new sap.m.List({
items: [
new sap.m.DisplayListItem({
label: "Company Name",
value: "{CompanyName}"
}),
new sap.m.DisplayListItem({
label: "Web Address",
value: "{WebAddress}"
}),
new sap.m.DisplayListItem({
label: "Phone Number",
value: "{PhoneNumber}"
})
]
})
]
});
oPage.bindElement("Supplier");
return oPage;
}
});
This view looks pretty normal and doesn't differ much from the original. We have the onBeforeShow and the createContent. But the key line is this:
oPage.bindElement("Supplier")
At the point that this is invoked, there's already the binding context that refers to the specific product previously chosen, say, like this:
https://sapes1.sapdevcenter.com/sap/opu/odata/sap/ZGWSAMPLE_SRV/ProductCollection('HT-1007')
(I'm using the 'sapes1' link rather than the SMP-rewritten one here so you can navigate them from here and have a look manually if you want.)
Following the navigation property mentioned earlier, to the supplier (the entity in the BusinessPartnerCollection) is simply a matter, OData-wise, of extending the path to navigate to the supplier, like this:
https://sapes1.sapdevcenter.com/sap/opu/odata/sap/ZGWSAMPLE_SRV/ProductCollection('HT-1007')/Supplier
So in OData terms, we're navigating. And in path terms, we're going to a relative "Supplier", which is exactly what we're doing with the oPage.bindElement("Supplier"). The bindElement mechanism, when called on an entity in an OData model, triggers an automatic OData "read" operation, i.e. an HTTP GET request, and updates the model. Bingo!
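The path arithmetic behind this relative binding can be sketched simply: a relative path is appended to the current binding context's path, yielding the navigation URL that the OData model then fetches. A hypothetical helper illustrating the idea (not the framework's actual resolution code):

```javascript
// Sketch of the path resolution behind bindElement("Supplier"):
// relative paths are appended to the binding context's path; absolute
// paths (leading '/') ignore the context. Illustration only.
function resolvePath(sContextPath, sRelativePath) {
  if (sRelativePath.charAt(0) === "/") {
    return sRelativePath; // absolute path: context plays no part
  }
  return sContextPath + "/" + sRelativePath;
}

var sPath = resolvePath("/ProductCollection('HT-1007')", "Supplier");
console.log(sPath); // "/ProductCollection('HT-1007')/Supplier"
```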
Looking at the Network tab of Chrome Developer Tools, this is what we see happens:
The first call (ProductCollection?$skipā¦) was for the initial binding to "/ProductCollection" in the ProductList view. Then a product HT-1007 was selected, the App navigated to the ProductDetail view, and then the supplier item was pressed. And when the bindElement in the SupplierDetail view was called, this triggered the last call in the screenshot - to "Supplier", relative to ProductCollection('HT-1007').
All automatic and comfortable!
sap.ui.controller("com.opensap.SupplierDetail", {
handleNavButtonPress: function(oEvent) {
this.getView().getParent().back();
}
})
Let's finish off with a quick look at the corresponding controller for this view. It doesn't have much work to do - just navigate back when the nav button is pressed. And it's very similar to the original.
So there we have it. Embrace SAPUI5 and its myriad features (automatic module loading, well thought out controls, OData models, and more) and have fun building apps.
That draws this series to an end. Thanks for reading. The link to the Github repo where the rewritten app can be found is in the original post in this series, and also here: https://github.com/qmacro/w3u3_redonebasic.
Share & enjoy!
In the index.html, we instantiated the App view, which is a JavaScript view. The view has a corresponding controller and they look like this.
sap.ui.jsview("com.opensap.App", {
getControllerName: function() {
return "com.opensap.App";
},
createContent: function(oController) {
var oApp = new sap.m.App("idApp", {
pages: [
sap.ui.jsview("Login", "com.opensap.Login"),
sap.ui.jsview("ProductList", "com.opensap.ProductList"),
sap.ui.jsview("ProductDetail", "com.opensap.ProductDetail"),
sap.ui.jsview("SupplierDetail", "com.opensap.SupplierDetail")
]
});
return oApp;
}
});
This is actually not too far from the original version. However, it is much shorter, as it takes advantage of the pages aggregation property of the App control, and sticks the views straight in there. This is much quicker and neater than the slightly pedestrian way it is done in the original version. Also, there is no need to navigate explicitly to Login (this.app.to("Login")) as the first control in the aggregation will be the default anyway.
sap.ui.controller("com.opensap.App", {
onInit: function() {
sap.ui.getCore().setModel(new sap.ui.model.json.JSONModel("model/app.json"), "app");
}
});
The App controller is even smaller, and uses the onInit event to create the JSON model that will hold the data about the application connection (this was mentioned in the Index & Structure post).
Note that rather than having one single model in the app that holds all sorts of unrelated data, as it is done in the original version (there's a single JSON model for everything, and that's it), I am using setModel's optional second parameter, to specify a name ("app") for the model. This way it becomes a "named model" and is not the default (where no name is specified). You'll see later that references to properties in named models are prefixed with the name and a ">" symbol, like this: "{app>/ES1Root}".
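The "app>" prefix convention can be illustrated with a tiny parser sketch: the part before the ">" selects the named model, and the rest is the path within it. This is a hypothetical helper for illustration, not the framework's own binding path parser:

```javascript
// Illustrative parse of a named-model binding path such as
// "app>/ES1Root": the part before '>' names the model; no '>' means
// the default (unnamed) model. Hypothetical helper.
function parseBindingPath(sPath) {
  var iPos = sPath.indexOf(">");
  return iPos === -1
    ? { model: undefined, path: sPath } // default model
    : { model: sPath.slice(0, iPos), path: sPath.slice(iPos + 1) };
}

console.log(parseBindingPath("app>/ES1Root")); // { model: "app", path: "/ES1Root" }
console.log(parseBindingPath("/Username"));    // { model: undefined, path: "/Username" }
```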
The original App controller had empty onInit, onBeforeShow and navButtonTap events, which I have of course left out here (I'm guessing they might have come from a controller template and left in there).
So the App view is used as a container that has navigation capabilities (it actually inherits from NavContainer); it doesn't have any direct visible elements of its own. Instead, the "pages" aggregation is what holds the content entities, and the first one in there is the one that's shown by default. In this case it's Login.
The Login view and its corresponding controller are somewhat more involved, so let's take a look at the rewritten version step by step.
The view itself is fairly self explanatory and doesn't differ too much from the original. There are however a couple of things I want to point out before moving on to the controller.
sap.ui.jsview("com.opensap.Login", {
getControllerName: function() {
return "com.opensap.Login";
},
createContent: function(oController) {
return new sap.m.Page({
title: "Login",
showNavButton: false,
footer: new sap.m.Bar({
contentMiddle: [
new sap.m.Button({
text: "Login",
press: [oController.loginPress, oController]
})
]
}),
content: [
new sap.m.List({
items: [
new sap.m.InputListItem({
label: "Username",
content: new sap.m.Input({ value: "{app>/Username}" })
}),
new sap.m.InputListItem({
label: "Password",
content: new sap.m.Input({ value: "{app>/Password}", type: sap.m.InputType.Password })
})
]
})
]
});
}
});
First is the use of the press event on the Button control. The tap event (used in the original version of the app) is deprecated. You will see that throughout the app I've replaced the use of 'tap' with 'press'.
Also, note how the handler is specified for the press event in the construction of the Button: [fnListenerFunction, oListenerObject] (and it's the same in the original). This form allows you to specify, as the second oListenerObject parameter, the context which 'this' will have in the fnListenerFunction handler. In other words, doing it this way will mean that when you refer to 'this' in your handler, it will do what you probably expect and refer to the controller.
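The effect of the [fnListenerFunction, oListenerObject] form can be shown with a minimal dispatcher sketch: the framework invokes the handler with the given object as 'this'. This is an illustration of the idea, not the SAPUI5 event machinery itself:

```javascript
// Minimal sketch of why [handler, listener] pairs matter: the handler
// is invoked with the listener object as 'this'. Illustration only.
function fireEvent(aHandler, oEvent) {
  return aHandler[0].call(aHandler[1], oEvent);
}

var oController = {
  name: "Login",
  loginPress: function (oEvent) {
    // 'this' is the controller, as you'd expect
    return this.name + " handled " + oEvent.type;
  }
};

console.log(fireEvent([oController.loginPress, oController], { type: "press" }));
// "Login handled press"
```

Without the second element, 'this' inside the handler would not be the controller, and references like this.getView() would fail.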
Then we have the construction of the values for the Input controls. Because I loaded the data about the application connection into a named model (to keep that separate from the main business data) I have to prefix the model properties with "app>" as mentioned above.
So now we'll have a look at the Login controller, and if you compare this new version with the original, you'll see that there are a number of differences.
sap.ui.controller("com.opensap.Login", {
oSMPModel: null,
As described in the course, this app needs to create a connection with the SMP server. The API with which to do that is OData-based - an OData service at the address
https://<your-id>trial.hanatrial.ondemand.com/odata/applications/latest/<your-app-name>/
and as you saw in this unit (W3U3) we need to perform an OData "create" operation on the Connections collection to create a new Connection entity. So to do this, I'm using a model to represent the OData service, and I'm storing it singularly in the controller - we don't need to set the model anywhere on the control tree, the create operation is just to make the connection and get the application connection ID (APPCID).
loginPress: function(oEvent) {
var oAppData = sap.ui.getCore().getModel("app").getData();
if (!this.oSMPModel) {
this.oSMPModel = new sap.ui.model.odata.ODataModel(
oAppData.BaseURL + "/odata/applications/latest/" + oAppData.AppName
);
}
When the login button is pressed we use the application data (from model/app.json, stored in the named "app" model) to construct the URL of the SMP connections OData service and create an OData model based on that.
this.oSMPModel.create('/Connections', { DeviceType: "Android" }, null,
jQuery.proxy(function(mResult) {
localStorage['APPCID'] = mResult.ApplicationConnectionId;
this.showProducts(mResult.ApplicationConnectionId);
}, this),
jQuery.proxy(function(oError) {
jQuery.sap.log.error("Connection creation failed");
// Bypass if we already have an id
if (/an application connection with the same id already exists/.test(oError.response.body)) {
jQuery.sap.log.info("Bypassing failure: already have a connection");
this.showProducts(localStorage['APPCID']);
}
}, this)
);
},
Now we have this SMP model, performing the OData create operation (an HTTP POST request), sending the appropriate entity payload, is as simple as
this.oSMPModel.create('/Connections', { DeviceType: 'Android' }, ...)
That's it. We just catch the APPCID from the result object and here we're storing it in localStorage on the browser. This is a small workaround to the problem with the original app where you had to delete the connection from the SMP Admin console each time. The failure case being handled here is where we are told that an application connection already exists ā¦ if that's the case then we just grab what we have in localStorage and use that.
Unlike the original app version, we're not interested in actually storing any results so there's no need to add it to the model. By the way, if you look at how the APPCID is added to the model in the original app version, there's a pattern used which goes generally like this:
var oData = sap.ui.getCore().getModel().getData();
oData.someNewProperty = "value";
sap.ui.getCore().getModel().setData(oData);
If you find yourself doing this, take a look at the optional second bMerge parameter of setData. It uses jQuery.extend() and it might be what you're looking for - it will allow you to simply do this:
sap.ui.getCore().getModel().setData({someNewProperty: "value"}, true);
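What the bMerge flag buys you can be sketched in plain JavaScript. Here Object.assign stands in for the shallow case of jQuery.extend's merge (the real JSONModel implementation differs; this is just to show the behaviour):

```javascript
// Sketch of setData with and without merge. Object.assign stands in
// for jQuery.extend here; hypothetical model shape, for illustration.
function setData(oModel, oNewData, bMerge) {
  oModel.oData = bMerge
    ? Object.assign({}, oModel.oData, oNewData) // merge: keep existing keys
    : oNewData;                                 // replace wholesale
}

var oModel = { oData: { Username: "dj", Password: "secret" } };

setData(oModel, { APPCID: "abc123" }, true); // merge
console.log(oModel.oData); // { Username: "dj", Password: "secret", APPCID: "abc123" }

setData(oModel, { APPCID: "abc123" });       // no merge
console.log(oModel.oData); // { APPCID: "abc123" }
```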
Anyway, we get the APPCID back from the SMP's OData service and then call showProducts (below) to actually start bringing in the business data and showing it.
showProducts: function(sAPPCID) {
var oAppData = sap.ui.getCore().getModel("app").getData();
var oModel = new sap.ui.model.odata.ODataModel(
oAppData.BaseURL + "/" + oAppData.AppName,
{ 'X-SUP-APPCID': sAPPCID }
);
sap.ui.getCore().setModel(oModel);
var oApp = this.getView().getParent();
oApp.to("ProductList");
}
});
The showProducts function creates a new model. Yes, another one. This time, it's a model for the business data, available at the OData service that was described in the course and is proxied behind the SMP service. So first we use the application data in the "app" model to construct the proxy URL, which will be something like this:
https://<your-id>trial.hanatrial.ondemand.com/<your-app-name>/
But then notice that we don't do anything manually, unlike the original app. We don't specify the HTTP method (GET) and we don't make any explicit calls (like OData.read). We just create a new OData model, specifying the service URL, and an additional object containing custom headers that we want sent on every call. The header we want is of course X-SUP-APPCID, so that's what we specify. From then on we just let the model do the work for us.
What we certainly don't do here, which was done in the original app, is call OData.read (which, incidentally, doesn't store the returned data in the model), and then manually shovel the raw JSON (the OData comes back as a JSON representation) into a single, central JSON model. There's no need, and this is really mixing up different mechanisms: OData and its corresponding model, JSON and its corresponding model, and their respective ways of working.
So you'll see, there are no explicit calls (HTTP requests) made for the business data. And you'll see that this holds true throughout the app (e.g. also later when we navigate from the ProductDetail view to the SupplierDetail view, following a navigation property). And remember, as described in the Index & Structure post in this series, there is no explicit external OData library (the original app had brought in datajs-1.1.1.js as a 3rd party library) - the SAPUI5 framework takes care of this for you.
Ok, well that's it for this post.
See the end of the initial post "Mobile Dev Course W3U3 Rewrite - Intro" for links to all the parts in this series.
Share & enjoy!
First, I'll take the lines of the new version of index.html chunk-by-chunk, with comments.
<!DOCTYPE HTML>
<html>
<head>
<meta http-equiv="X-UA-Compatible" content="IE=edge" />
<meta http-equiv="Content-Type" content="text/html;charset=UTF-8"/>
<title>W3U3 Redone Basic</title>
There's an important meta tag that I added, the X-UA-Compatible one. This is to give IE the best chance of running SAPUI5 properly. Without this there could be rendering issues in IE. (Of course, the alternative is to stop using IE altogether, but that's a different debate!)
<script src="https://sapui5.hana.ondemand.com/sdk/resources/sap-ui-core.js"
type="text/javascript"
id="sap-ui-bootstrap"
data-sap-ui-libs="sap.m"
data-sap-ui-xx-bindingSyntax="complex"
data-sap-ui-theme="sap_bluecrystal">
</script>
Here in the bootstrap tag I'm specifying the complex binding syntax, which I'll be using later on (in the ProductDetail view, to fix a problem with the product image URL). I'm also specifying the Blue Crystal theme (sap_bluecrystal), rather than the Mobile Visual Identity theme (sap_mvi).
<script>
jQuery.sap.log.setLevel(jQuery.sap.log.LogLevel.INFO);
jQuery.sap.registerModulePath("com.opensap", "./myapp/");
sap.ui.jsview("idAppView", "com.opensap.App").placeAt("root");
</script>
This is where you'll see the biggest change in this file. The open.sap.com course version has a ton of <script> tags (12, to be precise) to load every single file in the app, including some that aren't even necessary. This is simply ignoring the automatic module loading mechanism that is built into SAPUI5. The mechanism not only allows you to avoid a sea of <script> tags, it also allows you to organise your app's resources in a clean and efficient way, refer to them semantically rather than physically, and have load-on-demand features too.
Here, we're saying "modules that begin with 'com.opensap' can be found in the 'myapp' directory below where we are". And then we use that module loading system directly by asking for the instantiation of the "com.opensap.App" view (giving it an id of "idAppView") before having it rendered in the main <div> tag (see below).
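To make the semantic-to-physical mapping concrete, here's a plain JavaScript sketch of the *idea* behind jQuery.sap.registerModulePath — a registered name prefix resolving to a physical path. This is an illustration only, not the framework's actual implementation; the function names and fallback behaviour here are my own simplifications.

```javascript
// Illustration only: a simplified sketch of the idea behind
// jQuery.sap.registerModulePath - mapping a semantic module name
// prefix to a physical path, so "com.opensap.App" resolves to a
// file under "./myapp/". NOT the framework's actual code.
var modulePaths = {};

function registerModulePath(prefix, path) {
  modulePaths[prefix] = path;
}

function resolveModule(name) {
  // Try the longest registered prefix that matches the module name
  var prefixes = Object.keys(modulePaths).sort(function (a, b) {
    return b.length - a.length;
  });
  for (var i = 0; i < prefixes.length; i++) {
    var p = prefixes[i];
    if (name === p || name.indexOf(p + ".") === 0) {
      var rest = name.substring(p.length).replace(/^\./, "");
      return modulePaths[p].replace(/\/$/, "") + "/" +
        rest.replace(/\./g, "/") + ".js";
    }
  }
  // Unregistered names fall back to a simple dot-to-slash conversion
  return name.replace(/\./g, "/") + ".js";
}

registerModulePath("com.opensap", "./myapp/");
console.log(resolveModule("com.opensap.App")); // "./myapp/App.js"
```

So when sap.ui.jsview asks for "com.opensap.App", the loader knows to fetch it from the "myapp" directory — no `<script>` tag required.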
Also note the use of the jQuery.sap.log.* functions for console logging. This abstracts the logging mechanism so you don't have to think about whether console.log works in a particular browser properly (yes IE, I'm looking at you again).
</head>
<body class="sapUiBody">
<div id="root"></div>
</body>
</html>
Instantiating the view, which has an associated controller with an onInit function, is also a better way, or at least a more SAPUI5 way, to kick off processing than having a function referred to in the onload attribute of the <body> tag, as the open.sap.com course version of the app does. The sap.ui.jsview call causes that view's controller's onInit function to run, rather than relying on an onload="initializeModel()" in the <body> tag.
This post is also probably the best place to cover the app's organization and files, so I'll do that here as well. First, I'll show you what the original open.sap.com version looks like, then I'll show you what this new version looks like.
Here we see first of all that the views and controllers are split into separate directories. This isn't wrong, it just feels a little odd. So in the new version I've put the view/controller pairs together in a "myapp" directory.
More disconcerting is the model/model.js file. The fact that the file is in a directory called "model" suggests that it has something to do with data. But when we look into the file there is some data-relevant stuff (creating a JSON model and setting it on the core) but there's also some view instantiation and placement for rendering. This is not ideal. It's not a problem that the model file is a script (when you're building JSON-based model data manually it's often useful to be able to construct properties and values dynamically), but I do have a problem with the mix of concerns.
There's often a js directory when the app requires 3rd party libraries that have functionality that SAPUI5 does not provide. These three files (Base64.js, datajs-1.1.1.js and SMPCloudHTTPClient.js) do not fall into this category, and don't belong here. In fact, this whole directory and its contents are not required:
Base64.js is used in the original version to encode a Basic Authentication username / password combination. As you'll see, this is very manual and not necessary. The datajs-1.1.1.js library is a specific version of the OData library. SAPUI5 speaks OData natively and does not need an extra library; indeed, the inclusion of a specific version like this may clash with the one that SAPUI5 supplies and uses internally. The SMPCloudHTTPClient.js here is used to create a new client object that includes the Application Connection ID (APPCID) header on requests to the SMP server. As you'll see in an upcoming post that looks more closely at the use and abuse of the OData modelling in the app, this too is not necessary.
As you can see, the rewritten version is smaller, doesn't have extraneous and unnecessary libraries and has the view and controller pairs in one directory ("myapp", referred to in the module loading mechanism in the index.html file earlier).
It also has a 'real' model file that just has data in it - app.json. This data is just information about the relationship with the app backend on the SMP server and is very similar to the intention of the original version.
So, that's it for the index and app organisation. Have a look for yourself at the original and new versions of the index.html and model files, and compare them alongside this description here.
See the end of the initial post "Mobile Dev Course W3U3 Rewrite - Intro" for links to all the parts in this series.
Until next time, share & enjoy!
In the current open.sap.com course Introduction to Mobile Solution Development there are a number of SAPUI5 based example apps that are used to illustrate various concepts and provide learning and exercise materials. Unfortunately, these apps don't particularly show good techniques; in fact I'd go so far as to say that some of the approaches used are simply not appropriate:
I would class these as "must-change". There is an urgency of scale at work here as much as anything else; there are over 28,000 registered participants on this course, and it would make me happy to think that there's a way to get them back on the right path, SAPUI5-wise.
There are of course other aspects that are less "incorrect" with the app but nevertheless perhaps better done a different way. I would class these as "nice-to-have". Examples are:
* both these things will become more important over time, starting very soon!
So I've picked a first app - the "MyFirstEnterpriseReadyWebApp" in Week 3 Unit 3 (W3U3) - and re-written it. I have addressed the "must-change" aspects, but left (for now) the "nice-to-have" aspects.
I stuck to the following principles:
These principles are so that any course participant who has already looked at the original app will feel at home and be able to more easily recognise the improvements.
I've pushed my new "Redone, Basic" version of the W3U3 app to Github so the code is available for everyone to study and try out, but also over the course of the next few posts I'll highlight some of the changes and describe the differences and the fixes, and the reasons why. Until then, have a look at the repo "w3u3_redonebasic" and see what you think.
Here are the follow on posts (links inserted here as I write them) dealing with the detail of the rewrite:
Share & enjoy
Have I got the session for you!
CD168 "Building SAP Fiori-like UIs with SAPUI5" - 2hr Hands-On
What is Fiori? Well, from the http://experience.sap.com/fiori site, this is a good start:
A collection of apps with a simple and easy to use experience for broadly and frequently used SAP software functions that work seamlessly across devices - desktop, tablet or smartphone.
What is Fiori built upon? You guessed it, SAPUI5. And specifically, the sap.m library (and a sprinkling of other more general controls). Fiori is part application set, part responsive design, part look and feel, and part state-of-mind. What makes a Fiori app? Well, amongst other things, it's the use of certain controls and design to achieve the consistent experience you might have already come to expect from Wave 1, which delivered the 25 ESS/MSS apps. Wave 2 is coming soon, and set to deliver many many more apps across a broader functional range.
I have the tremendous privilege of working with the SAPUI5 teams in Walldorf currently and know first hand the massive effort (love, intelligence and dedication) that has gone into building the foundation for Fiori. And for me as a developer, it's very important to understand what's going on under the Fiori hood, not only for building new apps but for supporting my customers.
So enter CD168, a two-hour hands-on session at SAP TechEd titled "Building SAP Fiori-like UIs with SAPUI5". If you want to find out how to build SAPUI5 apps with a Fiori flavour, to get a feel for what controls to use and how to use them, then you should seriously consider attending.
It's an intensive hands-on session: an introductory walkthrough of the developer toolset and environment, followed by a series of ten exercises taking you from this to this:
where along the way you learn about databinding, localisation, resource models, XML-based views, formatter functions, best practices for application design & build and controls such as the responsive SplitApp, ObjectHeader & ObjectListItem, IconTabBar, SearchField and others.
The session is available at SAP TechEd Las Vegas, and also in Amsterdam, where, along with esteemed SAP colleagues, I am honoured to be co-presenting.
If I had not had the chance to contribute to the session materials and co-present, I would certainly have this session at the top of my must-attend list.
What about you? Get yourself along to the session site and find out more:
CD168 on Wed 06 Nov 17:00-19:00
CD168 on Thu 07 Nov 14:30-16:30
Perhaps see you in Amsterdam?
I recorded a screencast of the "end result" app that the participants will build:
I have put the source code to the app that features here and in the CD168 session as a repo up on Github: SAPUI5-Fiori. I've put some notes in the repo's README (displayed on the repo's homepage) - please read them for more info. They have links to this blog post, and also to the screencast walkthrough of the app which is here:
This screencast is part of a 2-part SAP CodeTalk session with me and Ian Thain on SAPUI5 and Fiori, the playlist for which is here: SAP CodeTalk - SAPUI5 and Fiori.
If you've got a sheet and want to consume that from a web app, for example, via JSON or JSONP, or just want a different way of getting data out of a spreadsheet for further processing in the environment of your choice (that has a JSON parser), then this could be useful for you.
The idea is that you have a base URL and append a query string, supplying values for two parameters: the id of the spreadsheet and the name of the sheet within the spreadsheet. For example, for this sheet, the value for id
would be
0AuAssa05Fog5dGc5WVNRbFZDcWJCLVY2V2NidWFKeXc
and the value for sheet
would be
Sheet1
The exposure is via a Google Apps Script, which uses a couple of Apps Script APIs from the Spreadsheet and Content services. The script, SheetAsJSON, runs as a web app, which puts a few requirements on the script itself.
It must implement a doGet method (for HTTP GET). It must be versioned (only versions of scripts can be deployed):
It must also be deployed as a web app and made available for others (or just yourself) to execute:
As you can see in the above screenshot, you also need to make sure the script is authorised to run. See the Google Apps Script documentation for more details.
The script source is here, and contains just a handful of functions, which I'll briefly describe.
doGet: Creates a text output object, grabs the id and sheet query parameter values, opens the spreadsheet, and reads the header and data records (via readData_), setting them as an array of objects in the data['records'] element. It then works out whether JSONP or JSON was requested, and returns the content with the appropriate content type.
readData_: Goes and reads the header row via getHeaderRow_, and then the data rows via getDataRows_. It replaces any whitespace in the header values with underscores.
getHeaderRow_: Grabs the first row of the sheet, with the intention of treating the content of each cell in that row as the property names of the data objects.
getDataRows_: Grabs the rest of the rows of the sheet, creating JavaScript objects with one property for each column.
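The real script does all this with the Apps Script Spreadsheet and Content services, but the core transformation can be sketched in plain JavaScript. Note that the function names (sheetToRecords, render) and the sample data here are my own illustrative inventions, not taken from the actual SheetAsJSON source:

```javascript
// Sketch of the core SheetAsJSON idea: take a sheet's raw 2D array
// of cell values, treat row 1 as the header (whitespace replaced
// with underscores, as readData_ does), and turn the remaining rows
// into an array of objects. Names here are illustrative only.

function sheetToRecords(values) {
  var header = values[0].map(function (h) {
    return String(h).replace(/\s+/g, "_");
  });
  return values.slice(1).map(function (row) {
    var record = {};
    header.forEach(function (name, i) {
      record[name] = row[i];
    });
    return record;
  });
}

// JSON vs JSONP: wrap the payload in a callback if one was asked for
function render(records, callback) {
  var json = JSON.stringify({ records: records });
  return callback ? callback + "(" + json + ")" : json;
}

var values = [
  ["First name", "Last name", "Age"],
  ["Jane", "Doe", 42],
  ["John", "Smith", 37]
];

console.log(render(sheetToRecords(values)));
```

In the deployed script the same shape of output is handed to the Content service, which also sets the appropriate MIME type for JSON or JSONP.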
Yes, it's perhaps over-simple in places, but it works for me, and may work for you too.
So with that in mind, let's say we use the "Typical Spreadsheet" shown in the screenshot above, and take its id and the name of the first and only sheet. When we append the query parameters appropriately onto the web app's URL for this particular instance of the script (mine), we get:
https://script.google.com/macros/s/AKfycbxOLElujQcy1-ZUer1KgEvK16gkTLUqYftApjNCM_IRTL3HSuDk/exec?id=0AuAssa05Fog5dGc5WVNRbFZDcWJCLVY2V2NidWFKeXc&sheet=Sheet1
which will return this:
Note that there's a redirect, which means the final URL you see in the URL bar is not the one above. Note also that the formatting in my browser is down to the great Chrome extension "JSONView".
Share and enjoy!
Of course as you may know Aviad, who organised the whole event, was announced as one of the new SAP Mentors today. Congratulations! I did have the "insider knowledge" yesterday, hence the double meaning of my flying the @SAPMentors flag tweet yesterday evening on the roof terrace of the SAP Labs building 🙂
It was a mind stretchingly great time, in the form of a Customer Corner Event which saw attendees from Danone, Coca Cola and Bluefin (me). Aviad had prepared a full agenda for Day 1, which included HANA related customer stories, roundtable discussions on various deeper-dive topics such as UI Integration Services, Portal-esque features for a solid integrated UI/app strategy, HANA XS, River, OData and more. There was also a panel discussion that covered topics such as cloud, HANA adoption, performance tuning, and "Kindergarten code" (yes, that phrase will stick!).
The second day (a half-day) gave me a chance to get a deeper dive look at some of the things we'd covered in Day 1. Specifically I'm thinking of SAP HANA UI Integration Services and River. There's too little time before my flight to cover these topics decently, so I'll keep it short for now and encourage you to take a look yourself, either in the existing docs, or by looking to TechEd and beyond for the amazing River features. Not long to wait!
You've built a few SAPUI5-based apps. You're also looking at SAP Fiori. But have you thought about your overarching UI strategy? Moving from inside-out to outside-in based development isn't just about building great apps for multiple runtimes. It's about a consistent experience, role-based access to apps, common services (e.g. persistence), tools for non-developer/user roles such as designers and administrators, and the ability to give your users a unified entrypoint to all this.
So you should be looking at a "frontend server" that might consolidate Gateway foundation/core, customisation and the repository for SAPUI5 runtime artifacts. The SAP HANA UI Integration Services provides the API layer as the foundation for this "unified shell", and is a very viable option (running on HANA) for such a "frontend server". Plus you get the toe-in-the-water benefits of a HANA system ready for trialling and experimentation. Stick this in the cloud and you're onto a winner.
Oh, and want portal-style ability to define multiple apps in the same 'site', communicating with each other via publish/subscribe, using an open standard? Add OpenSocial to the mix and you've got it. And they have. Define a simple SAPUI5 component with an OData model connection to a live backend service, have the data presented to you in that widget in familiar "Excel" format (rows and columns), and then pipe that data into another graph widget via pubsub. Excellent.
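The publish/subscribe idea at work here is easy to illustrate. The sketch below is just the general mechanism in plain JavaScript — it is not the actual OpenSocial or HANA UI Integration Services API, and the topic and payload names are invented for the example:

```javascript
// A minimal publish/subscribe sketch to illustrate the idea of two
// widgets on the same 'site' communicating via pubsub. Illustration
// only - not the OpenSocial / UI Integration Services API itself.

function PubSub() {
  this.topics = {};
}

PubSub.prototype.subscribe = function (topic, handler) {
  (this.topics[topic] = this.topics[topic] || []).push(handler);
};

PubSub.prototype.publish = function (topic, payload) {
  (this.topics[topic] || []).forEach(function (handler) {
    handler(payload);
  });
};

// One widget publishes a row selection; another (a graph widget,
// say) receives it and could re-render itself with the new data.
var bus = new PubSub();
var received = [];

bus.subscribe("rowSelected", function (row) {
  received.push(row);
});

bus.publish("rowSelected", { product: "HT-1000", revenue: 1234 });
console.log(received.length); // 1
```

The point is the decoupling: the publishing widget knows nothing about the graph widget, only about the topic.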
River as a concept and in various concept/demo forms has been around for a while, dating back at least to 2010 when I got to see a demo at the Innovation Weekend before the SAP TechEd 2010 event in Berlin. Jacob Klein wrote more recently about River, and was at the deep dive today at SAP Labs.
What I saw today bowled me over. Imagine a JavaScript-like, part-declarative, part imperative language where in Eclipse, in one breath, if you want to, you can:
* define your data (entities, properties, relationships, etc.)
* write procedural code in the form of functions to provide custom logic (when a simple entity read/write, for example, is not enough)
* declare authorisations and roles

and then, seconds later, use a table/column style UI still within Eclipse to create random / test data for the entities you've just defined, navigate those entities and jump between them via the relationships you also defined, and oh by the way have all the runtime artifacts generated automatically for you in the HANA backend. Further, the whole thing is exposed as an OData service with the appropriate entities, entitysets, associations, enumerations and function imports (I think you can guess which of these relate to your data definitions in River).
Within the procedural code you can access any HANA-based data, or via adapters, reach out remotely to your (say) ABAP stack-based systems too. And yes, (I asked, and they showed me live) you can debug and single-step through this too. Debugging directly in Eclipse or triggering it via setting a header in the HTTP request from outside.
Rather impressive stuff.
So unfortunately I have to go and catch my flight (and find somewhere to sleep!). It was a pretty awesome (and packed) time and I was totally privileged to have been able to take part. Thank you all for having me!
From the show and tell and judging on the weekend, here's a quote from one of the kids during his team's presentation to the judges, to explain their use and choice of data sources and backend systems:
"
In my tweet I alluded to the fact that this was a sentiment echoed by all the participants at YRS - the kids building cool hacks on open data and sharing the source code are our future.
What are we doing to help form and guide this future? Well for a start, there are a great number of people who get involved with this sort of thing on a regular basis. John Astill for example took part in a "hyperlocal" instance of YRS - at YRS NYC, last month. And of course, for the second year running, SAP itself, through the guidance and steady hand of Thomas Grassl is headline sponsor, helping make the whole thing happen (thank you SAP, I'm proud to have been able to connect you with YRS in the first place, last year!). Ian Thain was there also and wrote up a piece on YRS this year too: SAP and the young Developers of tomorrow.
As well as YRS there are other initiatives, regular events in the UK that take place. I wrote about what I've been involved with in a post:
Computational Thinking and Kids - A Year in Review
In SAP's continuous re-invention of itself, it is getting involved more and more in embracing a wider audience, engaging with those kids and students who are our future, and reaching out more broadly than ever. For this I applaud them. Yes, there are corporate goals and useful side-effects, such as bringing more developers closer to an SAP flavoured platform, and increasing the chances of SAP software longevity, but those side-effects have very real benefits in helping teach computational thinking and prepare our youngsters for a data-driven future.
If you're interested in this and more besides (such as the InnoJam and University Alliance initiatives), then watch this space - there will be a public SAP Mentor Monday event in September to cover these subjects. Hope to see you there. In the meantime, please let us know in the comments what you think, and what it might take for you to get involved too. Believe me, it is hugely rewarding as well as great fun.
Anyway, I was curious and so put together a spreadsheet tracking all of the 2013 hacks that had declared a Github repo in their information pages. You can see that there are a massive number of kids not only hacking code but sharing it with the world:
(see the interactive graph here: [http://www.pipetree.com/~dj/2013/08/yrs2013/commits.html](http://www.pipetree.com/~dj/2013/08/yrs2013/commits.html)). I wrote some Google Apps Script to poll the Github API, pulling commit info, and writing it to a Google spreadsheet.
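The actual code I used is in the gists linked further down; for a flavour of the tallying step, here's a hedged plain-JavaScript sketch. The function name and the shape of the sample data (modelled loosely on what the GitHub commits API returns) are illustrative assumptions, not taken from those gists:

```javascript
// Illustration only: tally commits per repository from data shaped
// like GitHub commits API responses (one array of commit objects per
// repo). Names and data shapes here are illustrative assumptions.

function commitCounts(reposWithCommits) {
  var counts = {};
  Object.keys(reposWithCommits).forEach(function (repo) {
    counts[repo] = reposWithCommits[repo].length;
  });
  return counts;
}

var sample = {
  "teamA/hack": [
    { sha: "a1", commit: { message: "initial commit" } },
    { sha: "b2", commit: { message: "add data source" } }
  ],
  "teamB/hack": [
    { sha: "c3", commit: { message: "first cut" } }
  ]
};

console.log(commitCounts(sample)); // { 'teamA/hack': 2, 'teamB/hack': 1 }
```

In the Apps Script version, a result like this is then written row by row into the Google spreadsheet.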
I've also made the data available as JSON (again, using the power of a little Google Apps Script), as I know that you can do a lot better than me visually. The data is here:
http://bit.ly/YRS2013HacksOnGithub
so please be my guest and put some more visualisations together. Let's see who can come up with the nicest representation this weekend.
The Google Apps Script source code that I used for this is available via a couple of gists on Github:
https://gist.github.com/qmacro/6199968 – retrieve and store commit counts
https://gist.github.com/qmacro/6199973 – expose a sheet as JSON
Share & Enjoy!
Finding myself sending birthday greetings to an inanimate, virtual object is a little odd. Am I going mad? Well, no madder than usual, but certainly older.
I can't believe it's been ten years since SCN was born. I was around at the time, and having played a part in pre-SCN SAP communities (which I wrote a little bit about in this post from 2005: The SAP Developer Community 10 Years Ago) I was honoured to be asked to help prototype "a new online community for SAP developers". This was a collaboration between [O'Reilly Media](https://oreilly.com), for whom I've written a couple of books, and SAP. Chief SAP Mentor herder and all round superstar Mark Finnern was involved too, and it was with Mark and also my old friend and partner in code-crime Piers Harding that I began to fill the fledgling community with content.
The content was sometimes controversial - I love the fact that when publishing this particular document - Real Web Services with REST and ICF - SAP put a disclaimer at the top saying I wasn't speaking for them and (effectively) they didn't fully agree with what I was saying 🙂.
But the content was there, and I wrote my first blog post "The SAP/MySQL Partnership" in this community in May 2003. It was the second blog post ever in this community (Mark wrote the first one), and the first blog post from a non-SAP employee.
The SAP Community Network has grown beyond what I think anyone could have imagined; it's the foundation for SAP's social, developer and outreach activities and has become a universe unto itself. Sometimes that's a bad thing, but most of the time it's a good thing. My favourite moment in SCN's lifetime is actually from way back, in the days when SAP had initially decided to keep all the content behind an authentication firewall. That wasn't good for the Web, and it certainly wasn't good for SCN. It was marooned on a small island, with no future. After a small campaign (I went to Walldorf and spoke at a meetup - slides are here: An Outsider's View of SDN), we got SAP to change their mind and now SCN is a part of the Web, indexed properly by Google, and it's a much better place for it.
So once again, congratulations to SCN on reaching 10 years, and exceeding all expectations. It's mostly down to a great number of heroes at SAP too numerous to mention, plus great community leaders and players like you.
Keep it up!
See the project on Github: https://github.com/qmacro/sapui5-chrome-icon.
Share and enjoy!
After registering my interest as a volunteer last year, I didn't get round to contacting a school until December, when I approached Woodhouses Primary, as it was in the village where I lived. I'd got my CRB check done back in September, via STEMnet, as advised by CodeClub, and that was a very straightforward process. In fact, I'm as much involved as a STEMnet Ambassador now as I am a CodeClub leader. Definitely worth looking into!
The CodeClub website had some great resources to help build a case to put to a school, and good notes for volunteers on what to do, what to expect and even ideas on what to say (if you were unsure). I arranged a meeting with the Headteacher and Year 6's Form teacher, and as soon as I explained what it was, how it worked, and that it was free, the deal was done. The following week I stood up in front of Year 5 and Year 6 children with some simple slides and explained what programming was all about. At the end, when I asked who might be interested in becoming a member of an after-school CodeClub, there were a lot of raised hands!
The school decided to restrict the availability to Year 6 children only, because of sheer numbers, and to give the Year 5 children something to look forward to! After all was said and done, I ended up with a total of 13 CodeClubbers.
Here are some bite-sized thoughts on my experience so far.
One final thought: If you're wondering whether to take the plunge and become a CodeClub volunteer, just go for it. The support is great, the community is growing, the time logistics will sort themselves out, and the rewards are unlimited. Go for it!
]]>Lovibonds. Why does that name ring a bell? Apart from representing a long tradition of brewing, which we'll dive into in a moment, it is also a name well known throughout the brewing and food science industries, as the surname of the inventor of the Tintometer. Used in brewing and many other industries, the Tintometer is a device for measuring and classifying liquids by colour, and was invented in the 19th century by Joseph William Lovibond. The Tintometer classification and colour scale system is still used today by brewers to buy their malt. (You'll no doubt be pleased to know that, according to the Tintometer Group website, their modern digital water test equipment and liquid colour test instruments are, ahem, waterproof.)
But before we go any further, let's get to the bottom of the Lovibonds name as it applies to the brewing company. Jeff Rosenmeier is the current proprietor and owner of Lovibonds Brewery. Originally from the States, and a software engineer (there's hope for me yet!), Jeff came to Henley-on-Thames already bitten by the brewing bug and looking for a site for his expanding brewing ventures. He came across the site that had originally belonged to John Lovibond & Sons Brewers and Merchants, and took over the name. Yes, that Lovibond. John Locke Lovibond was the father of Joseph William and three other sons who set up a brewing partnership in 1872. So the name Lovibond is almost literally steeped in brewing history and science.
Transitioning into its second incarnation, the Lovibonds brewing respect has only grown. With a handful of year-round brews, limited releases, specials and prototypes, the quality of their beers is becoming well known. I'm here at Port Street Beer House with a Dark Reserve Nr 3 in my glass. The person next to me has already picked up the aroma of bourbon. Jack Daniels, to be precise. This is a porter aged in Tennessee whiskey barrels. Dark brown with a brief tan head, you can almost sense the wet wooden barrel innards, imparting vanilla, nuts and raisins. The body is not as heavy as one might expect, and along with the malty mouthful there's a dark chocolate and bitter, almost sour finish, with some brown sugar sweetness towards the bottom of the glass. It's a strong one – at 7.4% – but the sample disappeared fairly quickly and it didn't feel like I was drinking something that potent. The sourness and relative lightness definitely added to the appeal, and the drinkability.
Port Street Beer House is currently running a Festival Of Britain(s Beers) and have brought together a great collection of British brewing talent. Lovibonds is a worthy member of this collection, and have earned their place at the taps with this excellent brew. The Festival is on until this Sunday 7th April, so get yourself down there before this Dark Reserve is gone. Quick!
Now it's a new month, and a new challenge. Although I don't see it so much as a challenge, but something I want to use the challenge mechanism to complete. In my activities with CodeClub and the MadLab U-18 CoderDojo activities I've a refreshed interest in coding at the core, and have been looking at and using Scratch and Python in earnest. And in wondering and discussing how to present approaches to coding, and in particular some Python idioms (for example, see this question on "pedestrian" vs "functional" approaches), I'm developing a keen interest in the functional programming features in Python. I was particularly taken by this video: http://www.youtube.com/watch?v=EnSu9hHGq5o and did some more research, combining what I was learning with the functions map, filter and reduce, also of course available in many other languages.
All roads seemed to lead to itertools, a Python library that "implements a number of iterator building blocks inspired by constructs from APL, Haskell and SML". So this month I'd like to investigate the functions in this library, one at a time. I'm not sure what that investigation will look like, but I know I'd like to have a look at each one in turn, find examples of how they might be used, and write a little bit about them. The writing part is interesting; I felt that this WordPress-based blog was slightly too formal (and cumbersome?) for what I wanted, and hankered after a Wiki-based environment, with minimal edit friction and the ability to build document and page structure relationships dynamically. I'd had one on this host a (very) long time ago, called "space"*, and it was a MoinMoin powered one. That's Python-based, and it served me well, so I've just installed a new instance. I'll use that to help me on my itertools voyage of discovery. Wish me bon voyage!
* Gosh, there are a few references to "space" still around in code, such as here!
The pace of innovation is not slowing. In fact it is accelerating -- rather like an Osborne 1 might do if you threw it out of the window of Larry Ellison's Gulfstream Jet. Is there a terminal velocity for innovation? I'm not sure. But while the rarefied atmosphere is a boost to the Osborne 1's progress, there's a danger back down here on terra firma that a similar rarefied atmosphere will hinder the progress of innovation: IT skills. Or rather, their waning nature.
I was fortunate; as a schoolboy, the simple, cryptic prompt on a teletype held me in a vice-like grip of fascination that has never let go. I didn't need to be taught IT skills; I taught myself, spurred on by wonder and novelty, in a time when you could devour almost everything that popular computing at the time had to offer.
Today, children in the UK are unlucky in the sheer abundance, the omnipresence of computing machinery. Laptops, games consoles, smartphones and programmable LEGO. And what do we as a nation do? We teach them how to use Microsoft Word and Excel, we place importance on the ability to turn out a well formatted letter, or the skill in navigating the complexities of Excel's myriad functions. In and of itself that's not a bad thing, except when that's the only thing that's taught.
What are we building? A nation of users? We should be building a nation of builders! Of makers! Is our destiny to be the IT service industry isle par excellence? Because that's the way we're headed if we're not careful.
Our nation has been one of innovators, of inventors, of leaders. If we continue to use the ICT education opportunities to teach our children how to do slide transitions in Powerpoint, how to put headers and footers on documents or how to plot a pie chart from a series of figures, how does that stack up for the future? Instead of teaching children to attain basic computer driving licences, how about teaching them something that will give them a better chance to both understand and - more importantly -- shape the world of computing, which has an ever increasing sphere and relevance to industry today.
Computational thinking is a term that I was first introduced to by Jon Udell. It encompasses logical thinking, precision, creativity and rigour, and embodies all that we should be teaching our children for them to grow up capable and ready for the IT age. Not as people who know how to put together a forecast and graph it, but as people who understand how systems work, how to take advantage of the data tsunami that's coming our way, and, crucially, how to stay in control.
A recent (Feb 2013) Department for Education study "Computing - Programmes of study for Key Stages 1-4" examines what a high quality computing education looks like, describes aims and attainment targets, and sets out subject content across Key Stages 1 to 4. This study resonates well with the ideas of computational thinking, and describes the aim of the National Curriculum as ensuring that all pupils can understand and apply the principles of logic, algorithms, computational analysis, and at the same time can be creative and confident in their approaches.
So with the desire to share my interests and passions, and having in mind the concepts of computational thinking, I joined CodeClub as a volunteer, and am about to start our local primary school's first after-school programming club with Year 6 children. The current CodeClub curriculum is based on Scratch, which is a great learning environment for programming, in more ways than I initially imagined (Scratch itself, interestingly, is based on Squeak). Furthermore I have become a STEM Ambassador, and my role in the Greater Manchester area is currently speaking to pupils on IT, helping schools shape their computing curriculum, and showing them how to take advantage of recent innovations such as the Raspberry Pi.
I display my STEMNET and CodeClub links proudly on my Bluefin Solutions email signature. It reminds me of my past, and of my future. What about our future? What about the future of our children's education and careers, and therefore also of our industry? If nothing else, I hope this post has made you aware of the gap between what our children are being taught and what they really need to know, and aware of the organisations that exist and are trying to do something about it. Wish me luck - I'm off to play my small part in helping build tomorrow's builders.
This blog post was first published on 14 February 2013. Since writing this, I had the opportunity to speak on this subject at a TEDx event, TEDxOldham, in Oct 2013. The talk was recorded and is available on YouTube: Our Computational Future: DJ Adams at TEDxOldham.
I uninstalled my Twitter clients (Tweetdeck for Chrome and Tweakdeck on my Android phone). I didn't log into them at all. I was still authenticated with Twitter on the website and visited twitter.com a couple of times to check something. No interaction, discussion or link saving, though.
So I'd say it was a big success. I did miss the interaction quite a bit; I missed the community of friends and colleagues (especially the SAP community) who have a big presence there. I missed the interaction and the sometimes thought-provoking discussions. But on the flip side, I did read a lot more; in other words, I used the time that I might otherwise have spent staring blurry-eyed at the columns of tweets, and read a lot of stuff I'd saved to Instapaper. It was great.
In the latter half of the month I'd more or less forgotten about Twitter and had got to grips with Google+. It's still no replacement for Twitter (mostly because of the people) but it's a great platform, indeed a social backplane, and I'll continue to spend more time there.
I know the idea of the 30 day challenges was not to attempt a full year of "no this, no that"; rather, there are some "do more of this, start doing that" elements too. But for February I'm attempting to drink no beer. That will indeed be a challenge, as I'm a big fan of craft beer, as many people know. It's not that I'm avoiding alcohol altogether; I'll allow myself a glass of wine here, a dram of whiskey there. It's the beer that will be absent. As I drove out of work last night I mentioned to the inimitable Jamal Walsh that I was embarking on this challenge, and he, having just finished his dry January, gave me a tag-team style high five. Wish me luck!
Here are a few examples of the bits and pieces that are contained in this repo:
Please have a browse, clone the repository, try the examples and snippets out, and feel free to contribute too!
The repo is here: https://github.com/qmacro/sapui5bin
Share and enjoy!
On reviewing the exercise text today, I noticed this bit:
The URL that pointed to the SAPUI5 Shell that contained the code to copy-n-paste had a query string in it, and that sub=3.1 caused the browser to go straight to a subitem in the Shell's workset item collection:
(please ignore the 256-colour quality of that shot)
I thought that was a nice touch, and dug around to see what they'd done. I was pretty certain I hadn't seen that as a feature described in the official and comprehensive SAPUI5 docu, so was curious.
What they'd done is added a small function to parse out the value of the "sub" parameter in the query string, and set the selected workset item accordingly. Here's that small function, getURLParameter:
function getURLParameter(name) {
return decodeURI(
(RegExp(name + '=' + '(.+?)(&|$)').exec(location.search)||[,null])[1]
);
}
(In case you're wondering, the ||[,null] bit towards the end just makes sure there's no exception when the requested parameter isn't found by the regex.)
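To see what that function actually returns, here's a standalone variant you can try outside the browser. The original reads the browser's location.search; in this sketch the query string is passed in as a second argument instead (that extra parameter is my addition, purely so the function can be exercised without a browser).

```javascript
// Standalone variant of getURLParameter: same regex and same
// ||[,null] guard as the original, but the query string is passed
// in explicitly rather than read from location.search.
function getURLParameter(name, search) {
  return decodeURI(
    (RegExp(name + '=' + '(.+?)(&|$)').exec(search) || [, null])[1]
  );
}

console.log(getURLParameter('sub', '?sub=3.1'));       // "3.1"
console.log(getURLParameter('sub', '?foo=1&sub=2.1')); // "2.1"
console.log(getURLParameter('sub', '?foo=1'));         // "null" (decodeURI stringifies the null)
```

Note that last case: when the parameter is missing, decodeURI coerces the null to the string "null", so there's no exception, just a harmless non-matching value.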
The workset items were defined in the Shell object like this:
oController.oShell = new sap.ui.ux3.Shell("myShell", {
appIcon : "./images/sap_18.png",
appIconTooltip : "SAP",
appTitle : "CD163 Exercise Templates",
showInspectorTool : false,
showFeederTool : false,
showSearchTool : false,
content: html21,
worksetItems: [new sap.ui.ux3.NavigationItem("NI_2", {key: "ni_2", text: "Exercise 2", subItems: [
  new sap.ui.ux3.NavigationItem("NI_2_1", {key: "ni_2_1", text: "2.1 Hello World"})]}),
new sap.ui.ux3.NavigationItem("NI_3", {key: "ni_3", text: "Exercise 3", subItems: [
  new sap.ui.ux3.NavigationItem("NI_3_1", {key: "ni_3_1", text: "3.1 Simple OData"}),
[…]
and the requested workset item was set as selected, with the corresponding content, in a large switch statement like this:
switch (getURLParameter("sub")){
case "2.1":
oController.oShell.setSelectedWorksetItem("NI_2_1");
oController.oShell.setContent(html21);
break;
case "3.1":
oController.oShell.setSelectedWorksetItem("NI_3_1");
oController.oShell.setContent(html31);
break;
case "3.2":
oController.oShell.setSelectedWorksetItem("NI_3_2");
oController.oShell.setContent(html32);
break;
[…]
Nice effect.
I wanted to confirm this for myself, and try to use fewer lines of code than SAP had done, as it looked a little bit verbose. So I wrote a little standalone snippet, available in my Github repo 'sapui5bin'. The snippet is https://github.com/qmacro/sapui5bin/blob/master/SinglePageExamples/shell_wsi_nav.html and defines a Shell with a few items / subitems like this:
var oShell = new sap.ui.ux3.Shell({
appTitle: "Shell WorksetItem Navigation",
worksetItems:[
new sap.ui.ux3.NavigationItem("id_wsiA", {key:"wsiA",text:"A",subItems:[ ]}),
new sap.ui.ux3.NavigationItem("id_wsiB", {key:"wsiB",text:"B",subItems:[
  new sap.ui.ux3.NavigationItem("id_wsiB.1", {key: "wsiB.1", text: "B.1"}),
  new sap.ui.ux3.NavigationItem("id_wsiB.2", {key: "wsiB.2", text: "B.2"}),
]}),
]
});
Note that my convention is to name the IDs the same as the keys, but with a prefix of id_.
I used the same getURLParameter() function, but then used a simpler method to dynamically set the selected workset item and content based on the value of the sub query parameter, like this:
var subId = "id_wsi" + getURLParameter("sub");
var wsi = sap.ui.getCore().byId(subId) ? subId : 'id_wsiA';
oShell.setSelectedWorksetItem(wsi);
oShell.setContent(getContent(wsi));
In the second of these 4 lines, I'm just making sure I handle the case where the user hacks the URL query string to specify a sub item that doesn't exist: I make sure that I can find an element with the ID specified (knowing that if it begins id_wsi then it's a workset item), defaulting to the first workset item if I can't.
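That fallback logic can be tried in isolation too. Here's a hedged sketch in which the SAPUI5 lookup sap.ui.getCore().byId() is stubbed out with a plain function, and the wrapper name resolveWorksetItem is my own invention for illustration; only the two-line candidate-then-fallback pattern comes from the snippet above.

```javascript
// Stub for sap.ui.getCore().byId(): returns a truthy object only for
// the workset item IDs defined in the Shell, undefined otherwise.
var knownIds = ['id_wsiA', 'id_wsiB', 'id_wsiB.1', 'id_wsiB.2'];
function byId(id) {
  return knownIds.indexOf(id) !== -1 ? { id: id } : undefined;
}

// Same pattern as the snippet: build the candidate ID from the "sub"
// query parameter value, fall back to the first workset item if unknown.
function resolveWorksetItem(sub) {
  var subId = 'id_wsi' + sub;
  return byId(subId) ? subId : 'id_wsiA';
}

console.log(resolveWorksetItem('B.1')); // "id_wsiB.1"
console.log(resolveWorksetItem('Z'));   // "id_wsiA" (hacked URL falls back safely)
```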
And that's it. Not a huge deal, but I was intrigued, and thought you might be too. For more info, see the complete snippet shell_wsi_nav.html.
Share and enjoy!
So my first 30-day challenge is to take a break from Twitter.
I've been toying with this idea for a while. It's not an attempt to be online less and interact socially less. The thoughts were initially triggered by the nosedive Twitter's standing took in the developer community when they poked a blunt stick in the faces of some of the very developers that helped Twitter succeed, by changing the API usage terms. But it's also to do with my interest in Google+, and wanting to see that platform and community succeed. I like the idea of Google+ and, despite it only having a read-only API for now, I'm encouraged by the direction in which it's being grown.
It's not going to be easy to take 30 days off Twitter, that's for sure. Many of my friends and work colleagues hang out on Twitter and I get a lot of Instapaper fodder there too. But perhaps a move to Google+ will help me make my mind up once and for all whether it's a viable new platform to replace what I use Twitter for. I'm not expecting other people to make the move, or at least give Google+ a try, but if they do, that's great.
Ok, so today I tweeted "I'm intending this to be my last (non-automated!) Tweet for January". Off we go!
Just over one week to go before Christmas, and the shoppers in Manchester are in full swing, dashing round town, bags in hands, pensive thoughts on faces. A few spouses are holed up here at Port Street Beer House enjoying some peace and quiet, and some great beers. A lot of darkness in glasses; the cold weather is properly upon us. Dark nights, dark ales. Heavy beers to warm us slowly from our core outwards. And with that context in mind, I'm about to have my personal and loose definition of 'beer' stretched a little bit.
Nestling at the bottom of one of the fridges behind the bar is a brown bottle with a black label and, with that label announcing "Kuhnhenn Brewing Company, 4th D Olde Ale, 13.5% ABV, aged for 9 months", quite possibly dark secrets. Sitting down at the table, I'm taken completely off track, although not unexpectedly. This old ale sits in the glass like liquid mahogany, barely a trace of head, and what head there was after pouring has quickly dissipated and become a tan ring round the glass. Before I even get the brew to my lips, my nose is hit by the heady raisin, rum and bourbon tones, which are as sweet as they are boozy. This is very clearly a sipping beer: hardly any carbonation, and very heavy. So I ready myself for the first sip ... and it's like a Cadbury's Caramel heavily diluted with bourbon and black cherry liqueur. Gosh. The mouthfeel is just the same, a syrupy malt lacing that fades delightfully, turning from obvious sweet to muscovado.
This Old Ale, properly called "Fourth Dementia Olde Ale", is from the Kuhnhenn microbrewery, in Warren, Michigan, a town due north of Detroit. Looking at the beers on offer, this dark, strong malty and caramel ale fits right in, with Imperial Creme Brulee Java Stout, Bourbon Barrel Barley Wine, Sticke Alt and Hairy Cherry coming from the same stable. That stable was originally a family run hardware store, and when faced with the prospect of losing out to a larger hardware chain that had moved into the area, the Kuhnhenn family, specifically two brothers Bret and Eric, decided to turn their home brewing know-how into a brewery business and reinvent themselves. It wasn't as unusual a transition as you might expect: Eric had been bitten by the home brewing bug at college, and selling home brewing supplies had eventually become a significant part of their hardware business. Having made the transition to a full brewing business, Kuhnhenn's is now a highly industrious eight barrel microbrewery.
The folks at Port Street Beer House have a knack for sourcing beer from passionate brewers, and this is no exception. If you're passing by, or wanting an escape from the cold outside (or the heat and chaos inside the myriad shopping areas) come in and spend a half hour getting to know this old and dark ale. You won't regret it. And you'll leave with a warm glow from within, like so many Ready-Brek kids of yore, but with a smile on your face.
I didn't have any particular school in mind, but in any case, in order to have the chance to run a coding club, I needed to get a CRB check, and CodeClub's getting-started page directed me towards the STEM Ambassadors Programme, where a lot of support for, and financing of, the CRB check process was available.
I duly attended the STEM induction session at Manchester's Museum Of Science & Industry (MOSI). MOSI are the STEM network account holders for Greater Manchester and there's a great team there. Since my induction, resulting in me and the other attendees becoming STEM Ambassadors, I've become more involved with STEM activities, recently helping schools and teachers learn about the Raspberry Pi and form course ideas around it.
There was a recent evening event held at Manchester University where teachers from schools all around the North West gathered with Manchester University staff, MOSI/STEM folks and STEM Ambassadors to help each other learn about the Pi, and also about the PiFace, a shield with easy-to-use screw-terminal based physical interfaces for connections to and from the real world (Internet Of Things here we come!), invented by the University's own Dr Andrew Robinson. We covered Scratch- and Python-based programming with the PiFace. Great fun.
Based on connections I made at that evening event, I followed up with the Raspberry Pi theme to visit a couple of Sixth Form Colleges in Manchester: Xaverian, and Pendleton (part of Salford City College). I took a Pi with me, along with lots of cables, and a serial terminal (yes, that one). I also took a few slides that I put together to explore loop constructs in different languages, and finally a suitcase full of career memories. I wrote my visits up on the STEMnet site: Raspberry Pi and Manchester Colleges. There's also a news article on Xaverian's website about my visit: "A Taste Of Raspberry Pi!"
I'm currently helping a Manchester school put together a Python programming course, based again around the Raspberry Pi. The whole STEM experience is rewarding and fun in equal measure. While I haven't any spare time for a CodeClub club at the moment, I'm so glad I volunteered, as I probably wouldn't have discovered STEM otherwise. I'm in contact with other CodeClub volunteers in Manchester (we're having a meetup later this month) and haven't ruled it out. But for now, there's plenty to do as a STEM Ambassador. Thanks CodeClub!
Go back about a year, and I was learning from my friend and SAP Mentor colleague, the ever erudite Thorsten Franz, about MBTI types. In particular, I was trying to figure out the real difference between I (introvert) and E (extrovert) types. In one sentence, Thorsten made it really clear for me, and I've remembered the test ever since. He said "Do you recharge alone, or in a group?" That nailed it. For me, if I have a day where I can't get a quiet moment or three to myself to sort through my thoughts, I feel unbalanced, and the chaos and randomness of the day remains, rather than getting a chance to find the channel to flow evenly away. Yes, I'm happy in a crowd, happy presenting and waving my arms about. But I value time to myself and it's that time that I need to recharge.
So I can completely understand the reasons for the Silent Club. Emma has gone one step further than the recharging event, by making it possible, but not mandatory, to wrap one's thoughts up into a short letter or postcard which can be sent to a PO box address, where she will scan them and publish them at http://thesilentclub.tumblr.com. If you're the sort of person who values drawing a line under your thoughts, or needs to have a specific end to the recharge cycle, then this sounds ideal.
I guess I can say that I've been a practising member of the Silent Club for a while, I just didn't know it. So thank you Emma for forming something so neat, simple and well-formed around a subject that is close to my heart.
]]>Just now, I attended the SAPUI5 Q&A session with Tim Back and Oliver Graeff, where they presented a great overview of the libraries, tools and features of what is becoming an ever more popular platform for outside-in UI development. After all, it's almost policy at SAP to use SAPUI5 for development projects, where appropriate. ("Where appropriate" means in many circumstances except probably heavy power user application UI paradigms). One of the key features of SAPUI5, and in particular the DataTable controls, is the ridiculously easy consumption of data. In particular, data made available by Gateway, in the form of OData. Sure, as I've noted before, SAPUI5 can consume arbitrary XML and JSON too, but the data exposed in the related, resource-oriented fashion by Gateway, OData in other words, is where the magic happens.
Start with controlled definition of resources, and the relationship between them, done in your systems of record using the IW_BEP backend Gateway component building Model and Data providers, either manually or using the Service Builder. Then expose those resources and relations to your UI developers using the core Gateway components (GW_CORE and IW_FND). Then, you're off. Within no time you can start to see an application form around that data, with the right layer performing the right function with minimum friction. And that speed comes from the investment SAP has made in OData, an investment to make it all pervasive and all consumable.
So we know about Gateway being a key mechanism to expose OData for ABAP stack systems. Is there anything else? You bet. SAP HANA, full of data, can expose that data in an OData context. Use the magic of xsodata, create a definition marking a HANA table or view in a schema as an Entity, and boom you have a consumable OData service. And it doesn't stop there. There are facilities in the NetWeaver Cloud to produce OData too.
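To give a feel for how small that xsodata step is, a service definition looks something like the sketch below. The schema, table and entity names here are made up for illustration; the shape follows the XS OData service definition syntax.

```
service {
  "MYSCHEMA"."MYTABLE" as "MyEntity";
}
```

Save that as a .xsodata file in an XS application package, and the table is exposed as an OData entity set, queryable with the usual OData URL conventions.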
What does all this mean? Well, to me it means two things. The first thing is that it means that Gateway has already been a great success. It Just Works(tm). I recently completed a customer project which went live earlier this year, and Gateway was a key component in the integration architecture. And after setting Gateway up and defining our entities and the relations between them, we moved up a layer in the stack and never really had to work hard on Gateway at all. It did exactly what it said on the tin. We started to use, and reuse, entities that we'd defined, in building out the features in the consuming application.
The second thing is how important your investment in Gateway is. Embrace Gateway and by definition you're embracing OData. Before you know it you and your fellow developers are conversant in Entities, Entity Sets, Associations and Navigations (the relationships) - the building blocks of information in OData. And while this is a super end in itself, you're also setting yourself up to move out into the cloud, and across onto HANA. Have a look at the speed with which you can put together an app that consumes data supplied to it from Gateway. And then consider you're investing in that speed, and that speed across platforms.
About me
So, where do I start? Well, my name is DJ and I hack on SAP and related tech for Bluefin Solutions. I was born very young, in Manchester. Tim Guest has already mentioned a little bit about Manchester in his BIF post but I wanted to paint the picture a little more. Manchester is a city in the north west of England, and is often referred to as England's second city (after London). It's a fantastic place to live, a very in-your-face, matter-of-fact place that is confident and comfortable with its place in history. It's where the Industrial Revolution was born; the Industrial Revolution marked a major turning point in history and kickstarted what we now recognise today as major manufacturing, agriculture and transportation concepts. Manchester is where the first 'true' canal was constructed (the Bridgewater Canal), which opened in 1761; canals became the arteries of early industry in the 18th century and helped move goods and raw materials around the country. Manchester is also of course the birthplace of the computer, and is proud to call Alan Turing, the father of computer science, a son.
I grew up on a farm only a few miles from the centre of Manchester, in a village called Woodhouses. We bred pigs - about 400 at any one time - and cattle, particularly the Galloway breed. The rest of the village, at that time, consisted of other farms, and not much else save for a few houses. Life was simpler and growing up it was pretty idyllic when I think back. Later the farm moved on to producing pig food, boiling up all sorts of stuff and making such a stink that we got frequent complaints from the neighbours and the next town. The vats that we used to use were two stories high!
I left Manchester for London, to study Classics (Latin & Greek) at the University of London, and after graduating, started work for an oil company (Esso) in London (computing, of course). Since then I've lived and worked in many places, most notably Germany (Esso AG, Deutsche Telekom, SAP, and other places) but also Denmark, France and briefly in the US. But now I'm back in the village where I started. Douglas Adams, author, of course, of The Hitch Hiker's Guide To The Galaxy (and no namesake) once talked about "tiny invisible force tendrils" that tie every being in the universe to his birthplace, and I guess those force tendrils have eventually pulled me back to Woodhouses.
I got into computing at school, where we had a PDP minicomputer. Yes. This was 1977. In fact, an interview with me over on O'Reilly's Radar site explains more about this, so I'll point you there for more info. Suffice it to say I was completely mesmerised. This BIF initiative suggests I post a picture of myself or my home town, but instead, here's a "picture" of something that will be forever etched on my eyeballs (in a good way) - the "ready" screen of my first personal computer - the Acorn Atom. 8-bit 6502 processor, 2k RAM.
Now that's an interface! (I only say this because I've just seen that Jon Reed has just tweeted to me about an article entitled "The Best Interface is No Interface".)
Anyway, to the questions! One from Marilyn Pratt, and the others from Matthias.
That was easy. I wanted to work for IBM. Simple. I loved the idea of big iron, the lure of the massive computing engines, and (in retrospect, perhaps) the wonders of the ivory towers. I'd cut my teeth on proper multiuser machines (PDPs) and so appreciated what IBM had to offer. And so I applied for a summer job (between university terms) and went to work at IBM in Sale, Greater Manchester (Jackson House, Washway Road, to be precise). My job there was to understand, devour and document a system written in CLIST, running on VM/CMS (yes, VM as in Virtual Machine. Decades old technology :-). Absolutely loved it. When I went to work for Esso after University, I was almost immediately knee deep in MVS/XA, 370 assembler, JCL and VSAM, on an SAP R/2 project. Like a pig in the proverbial.
I've participated actively in a lot of communities (including this one: I helped build SDN/SCN from the ground up, back in 2003):
and of course am proud to be an SAP Mentor.
What got me first started actively participating was an itch I needed to scratch. Back in the mid 1990s we SAP hackers were pretty isolated and I decided to form a mailing list for us to get together. This list was called 'merlin', and eventually merged with another list called 'sapr3-list', and the combined community eventually became SAP-R3-L. Read more about this history in the post "The SAP Developer Community 10 Years Ago" (that post itself was 7 years ago, gosh!). After starting the list, the community was formed and the itch became scratched, but I persevered with running it (it was a lot of work!) because the benefits far outweighed the efforts. The community, the sense of belonging to a group of people similar to you, and the knowledge gained and shared, was brilliant. That still holds true today.
Give X to a community, and you're likely to get X^2 back. If you're not already actively involved, give it a go!
Anyway, I guess that's more or less it for my BIF post; I'm running the risk of boring you all to pieces. So now I have the honour of blogging it forward to individuals whose work I try to follow as diligently as I can - Jason Scott and John Patterson both of whom I admire for their work in SAP development, integration and mobile areas. In addition to choosing one of the questions I've already answered, I'd like Jason and John to answer the following questions:
Thanks for reading!
Haandbryggeriet: what a mouthful, and we haven't even got to the name of the brew yet! Actually, when you break it down, this name is from two Norwegian words and simply translates to "Hand Brewery"; in other words, an extremely small scale operation. Four guys, working on a voluntary basis, brewing by hand in a small building in Drammen, southwest of Oslo. At this scale, and with the enthusiasm that oozes from the pages of their modest website, it's clear that the brewers are fantastic amateurs, in the original, complimentary sense of the word: working the brewery for the love of it. (If you're curious about this reclaiming of the word 'amateur', read Paul Graham's essay "What Business Can Learn From Open Source" here: http://paulgraham.com/opensource.html.)
For a small operation, Haandbryggeriet has certainly produced a wide range of beers: from a wheat stout called "Dark Force", through an Akevitt barrel aged porter, to a hop-free Gruit beer made with herbs, brewed as a guest beer in cooperation with the de Molen brewery.
Norwegian Wood is a Haandbryggeriet beer available at Port Street Beer House on tap, and is brewed all year round. It's a traditional Norwegian beer that has been recreated in memory of the farm brews that abounded when old laws required them to produce ale (farms were sometimes confiscated and went to the church and the king if they didn't). In fulfilling their requirements, the farms usually kilned the malt over an open fire, giving each brew a smokiness that has been recreated here. The brew was enhanced with the traditional spice for all Norwegian beer at the time: juniper. The juniper spice comes not only from the berries themselves, but also from the twigs that are placed in the mash tun.
So many miles and years away from these traditional Norwegian farms, I sit here with a serving of Norwegian Wood. As I observe the hazy copper colour and the fading creamy head, there's an intense aroma of pine and smokiness. Not an unpleasant or strong smokiness, but something more subtle, akin to pipe tobacco. There's a taste of pine and a hint of cooked juniper berries, and rather than smoky, the flavour is more nutty and slightly sticky sweet, with an undercurrent of charcoal or cinder. The first sips also had a fruitiness about them but towards the bottom of the glass this had been replaced with a decent hint of malt that was very pleasant.
Haandbryggeriet brews Norwegian Wood with smoked malt from Germany, along with other malts including crystal and chocolate. There's a wealth of aromas and flavours in a small glass of this traditional ale, and the smokiness is by no means the dominant feature. I wouldn't describe myself as a fan of smoked beer in the classic "Rauchbier" sense, but I definitely would order this again. With pine, hazelnuts, juniper and cinder in there, this beer is not only a mouthful to pronounce, but a very pleasant mouthful to enjoy.
One of my interests is retrocomputing, in particular, serial terminals. I had a great collection of them a while ago (VT320s, VT330+s, VT420s, and various Wyse models) all connected to multiple serial ports in the back of a Linux box (the serial signals actually running over Cat5 and re-converted at the patch-panel end, but that's another story). Sadly I no longer have any of these terminals (another story again) but my friend Robert Shiels recently donated a Wyse WY-30 to me. Joseph and I decided to bring it along to the Raspberry Jam and do a show-n-tell on connecting it up to the Pi. The ultimate goal is to have a standalone serial terminal, good looking enough and retro enough to have in the living room, with a Raspberry Pi actually inside it, with a serial cable connection to connect, and a wifi adapter to hop on to the local network and from then on to the Internet. A silent, 80x24 green screen connection to life, the universe and everything.
Making the serial connection
First things first. The Pi has 2 rows of general purpose input / output (GPIO) pins at 3.3V (top left in the picture), but that means that we canāt use an RS232 serial connection directly as the voltage levels are too high. Rather than build or buy a converter, we used a simpler method. Most modern Linux distributions, including Debian Squeeze, provide support for USB serial ports, so getting hold of a USB serial cable was the first job. This connects to one of the USB ports on the Pi, and has a 9 pin D serial connector on the other end.
Booting up the Pi with the USB serial cable connection shows this:
raspberrypi kernel: usbserial: USB Serial Driver core
raspberrypi kernel: USB Serial support registered for pl2303
raspberrypi kernel: pl2303 1-1.3:1.0: pl2303 converter detected
raspberrypi kernel: usb 1-1.3: pl2303 converter now attached to ttyUSB0
Aha! ttyUSB0. This means we have a device handle that we can use.
Connecting the USB serial cable to the terminal won't work directly; we need to have the RX and TX connections reversed (so that RX sends to TX and vice versa). A handy null-modem cable will sort this out for us. So at this point we have Pi USB -> serial connector -> null-modem cable -> terminal.
Getting the login prompt
Connecting the terminal to the USB serial port is one thing; getting a login prompt on it requires more work. This is where the lowly 'getty' (from 'get teletype') program comes in. Getty is from a long-gone era of physical teletypes and text terminals, and is used to manage these terminals by listening for a connection, displaying a login prompt, and running the login program to authenticate a user.
Getty needs to know a few things: what serial port to listen for a connection on, what speed the connection is expected to be at, and what terminal type the remote terminal is.
The invocation of getty we will use (as root) is this:
/sbin/agetty -L ttyUSB0 19200 wy30
(I'm actually using agetty here, an alternative getty program with some useful extra features).
This says: listen on ttyUSB0 for a connection, at baud rate 19200, donāt bother with carrier detect (i.e. force the line to be local), and set the terminal type to be wy30.
If you've got the serial cable connection right, and you've configured the terminal settings to be 19200 (at 8N1, i.e. 8 bits, no parity, 1 stop bit), you should see this on the terminal:
Wonderful!
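If you don't see the prompt, it's worth double-checking the line settings on the Pi side too. A sketch, assuming the USB serial adapter came up as /dev/ttyUSB0 as in the kernel log above (agetty sets the speed itself when it runs, so this is just for poking at the port manually):

```shell
# Set 19200 baud, 8 data bits, no parity, 1 stop bit on the USB serial port
stty -F /dev/ttyUSB0 19200 cs8 -parenb -cstopb

# Show the port's current settings to confirm
stty -F /dev/ttyUSB0 -a
```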
One thing you probably want is to have getty listen out for a serial terminal connection all the time, from boot. To do this, add a line to /etc/inittab like this:
T0:23:respawn:/sbin/agetty -L ttyUSB0 19200 wy30
This means you can disconnect and reconnect on your terminal at will.
Adapting the terminal settings
Arguably the most common terminal standard is VT100. This came from the DEC terminals of yore, and through popularity became the de facto standard that OEM terminals emulated, and it's what software terminals such as PuTTY will emulate for you too. The Wyse WY-30 that we have has a terminal standard, or 'personality', which is 'WY30+'. It will also emulate TVI910+, TVI925 and ADDS A2. Not a VT100. So that's why we specify 'wy30' on the getty invocation.
But that specification won't work unless the Pi knows how to speak WY30, and for that, the terminfo database is used. Terminfo is a library of escape sequences for manipulating display terminals. Moving the cursor around, clearing the screen, that sort of thing. Termcap is a similar library that predates terminfo.
The Debian Squeeze distribution that was put together for the Pi doesn't include the terminfo database, but a quick apt-get invocation later, and we have it:
sudo apt-get install ncurses-term
Now we have the wy30 entry in the database, as a file 'wy30' in /usr/share/terminfo/w/.
With the file containing the appropriate escape codes to control a Wyse WY-30 terminal available in terminfo, and the specification of 'wy30' in the getty call, we have all we need to start a productive session on the serially attached terminal.
Rebooting to check that the init spawning of getty is working correctly, and we can log in at our Wyse terminal and use tools such as top, vim, tmux and others that manipulate the screen, without problem.
Success!
So to find out more about this framework that's been maturing for those 18 months, have a look at the DemoKit, the post announcing that SAPUI5 was going open source (OpenUI5), and the OpenUI5 home page. The developers amongst you ought to visit SAPUI5's home on the SAP Community Network, where you'll find lots of content such as a series of posts from me covering an in-depth analysis and re-write of an SAPUI5 application: Mobile Dev Course W3U3 Rewrite.
But donāt go there just yet ā have a read of this post, which will put SAPUI5 into context for you.
Heard of SAP's "User Interface Development Toolkit for HTML5"? No? Thought not. How about "SAPUI5"? Ah, that's more like it.
SAP's User Interface Development Toolkit for HTML5 - aka SAPUI5 - is a very recent offering from SAP that, despite being an absolute mouthful when you use its official product name, is something that I suspect we will be hearing a lot more about in the next 12 months.
When you think of the SAP user interface experience, what comes to mind? The venerable SAPGUI? The edgy NetWeaver Business Client? Some browser-based but ultimately and unmistakeably SAP flavoured HTML experience? For many of us, it's "all of the above". When you consider all of these approaches, and the technologies that power them, there's a single theme that emerges: the theme of "Inside-Out". Classic dynpros, WebDynpro for Java, WebDynpro for ABAP, Business Server Pages and (gasp) home-brew solutions based on a custom set of templates are all technologies where the user experience is designed, built and pushed out from the inside of an SAP system, and exposed to the outside in the last-mile of user connectivity. That's served us well, but there's a sea-change ahead.
"SAPUI5 supports application developers in creating fast and easy User Interface Applications based on HTML5 and JavaScript."
That's from the SAPUI5 homepage on SAP's Developer Centre. I'll translate, and add an observation that may go otherwise unnoticed: SAPUI5 is a framework and a series of libraries that front-end developers can use to build compelling, non-clunky (but still SAP-focused) genuine HTML5-based applications. It's a framework that embraces (well, includes, actually) the ever popular jQuery, and has more UI controls than you can shake a stick at. It has a core UI layout called the "Shell" which is an implementation of what we might traditionally call a dynpro frame, a sort of meta-component which is as good-looking as it is flexible and adaptable.
So what might go unnoticed? The fact that this is SAP's first major UI venture which adopts - by design - an "Outside-In" approach.
Outside-In? What does that mean? It means that rather than have your UI construction weighed down and otherwise restricted by unnecessary, irrelevant and somewhat proprietary tech in the SAP system, you can approach your new applications with a fresh, unfettered and ultimately independent flexibility. Build your applications in the context of today's UI runtimes (i.e. build in HTML5 for the browser), and support your applications with data and functionality in your backend SAP systems as and when required. Build from the outside, and connect into SAP when appropriate.
What else does that mean? It means for SAP the ability to reach out to the otherwise non-SAP developers out there, the myriad mobile & desktop app-shop developer teams that are experts in constructing solid and user-focused applications. If SAP are to get anywhere near attaining the goal of reaching one billion users, then this is an approach that becomes absolutely necessary.
Last year I attended SAP TechEd in Madrid, and this year I had the privilege of giving a session at SAP's internal Developer Kick-Off Meeting (DKOM) in Karlsruhe, Germany. What I observed at both events was that in the majority of presentations and sessions that I attended, SAPUI5 was being used for the presentation layer. It seems already to have become the "goto UI framework" for SAP development. And why not? It's exactly the right approach, allowing front-end and back-end developers to shine. And if you're both, then that's ok too - as the SAPUI5 framework is relatively easy to get to grips with, especially if you have already had exposure to modern client-side JavaScript programming.
Finally, the significance is exponentially enhanced by the fact that out of the box, SAPUI5 supports data bindings for raw XML, JSON ... and OData. And we all know what that means, right? As the maturing lingua franca of SAP's API landscape, SAP NetWeaver Gateway's support of OData as the data-centric consumption protocol becomes a powerful ally of a UI framework built with the right focus from the get-go. The blog post
SAPUI5 says 'Hello OData' to NetWeaver Gateway on the SAP Community Network shows how easy consumption of Gateway-exposed OData can be from SAPUI5.
SAPUI5 is in early beta. It was released (Beta runtime 1.2.0) on the SAP Developer Network on 8th Feb this year as a standalone package for trial. As betas go, this release is extremely impressive. Tons of documentation, interactive examples, and a very complete set of components. So complete in fact that the biggest criticism so far seems to be that the framework is rather large. That's partly because none of the code has been minified (automatically re-written to be a lot more compact - something very typical in the browser-based JavaScript world where network latency and bandwidth are significant factors).
The next release is scheduled to be bundled as part of SAP's Platform-as-a-Service (PaaS) offering codenamed "Neo". There will be a standalone product, but the focus is on Neo first. They are still working on a mobile version, and there's no date for that yet.
So if you're looking at SAPUI5 for your future user interface requirements, you're on the right track, but are going to be an early adopter.
SAPUI5 is here already, and among the early adopters in the wider SAP geek community, it is receiving significant (and deserved) attention. What's more, the product team behind it is approaching the framework's growth in exactly the right way - by actively engaging the developers. It's early days, but I totally applaud SAP's direction and efforts thus far. If you're interested in rapid deployment of prototype, ad-hoc and full blown productive apps powered by your timeless SAP infrastructure, keep an eye on SAPUI5.
Multitail is something I mentioned on my Enterprise Geeks slot with Craig Cmehil and allows you to tail more than one file at once. Very useful for keeping an eye on all those log files in the instance work directory!
And screen is one of those great utilities that I put in the same class as putty and vim: absolutely essential. It allows you to maintain multiple persistent sessions on a remote *nix host. Great for disconnecting and reconnecting (especially on dodgy 'net connections) and being able to continue exactly where you left off.
I realised that people might benefit from these too, so I thought I'd offer them for you to download in binary form, so you can avoid going through the hassle of firing up the package manager and wrestling with repositories and dependencies, or building from source. I built them from source on a 64-bit SUSE Linux VM 'nplhost' straight from SAP, so they should work if you're using the same as the standard VM recommended for the trial. If you've decided on a Windows VM to run Gateway, then you're out of luck, in more ways than one :-)
They're available here: http://www.pipetree.com/~dj/2012/04/nplhost/
Download them to npladm's home directory to run them from there. Don't forget to (a) chmod +x each of the binaries, and (b) rename the _ to . for each of the dotfiles.
Share and enjoy!
]]>Here's the post: 'DJ Adams and a trip down technology lane'. If you're after the audio file directly, it's here.
I had a lot of fun. Thanks Craig!
]]>Everything is a resource
There's an idea that has been a long time in gestation - the idea of a loose coupling of data storage, front end apps, and backend command line environments. Firebase, an offering still in beta, with some features pending, has come along and seems to be delivering that. With style. Style not only in the actual UX, but in the design approach. In a recent talk on SAP NetWeaver Gateway at the SAP Developers Kick-Off Meeting (DKOM) in Karlsruhe, I had a slide that simply said:
Everything is a resource
This is a key tenet that underpins the values of REST and related directions in information architecture: that if a piece of data (or, indirectly, a business function, for that matter) is important, you should give it a name, an address - make it a first-class citizen on the web. From there, everything else follows. You can manipulate it, you can describe it, and you can link to it.
With Firebase, each piece of JSON data you store in the backend gets its own URL. Each object, array, element and attribute is automatically given an address, as you create them. You can manipulate the data via the Javascript library, through a REST API and also through a lovely graphical debugger that looks like this:
(Firebase graphical debugger screenshot)
With the debugger you can manipulate the data directly too. What I'm guessing, from the way that the debugger operates, is that the debugger itself is powered by Firebase. When data in the data set that you're viewing is changed - whether that change is initiated via the REST API or by activity in a Firebase-powered application - the view in the debugger is automatically updated to show that change.
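To make the 'everything is a resource' idea concrete, here's a small sketch in plain JavaScript (not the Firebase library - the base URL is made up) that walks a JSON structure and derives the URL that each nested node would be addressable at:

```javascript
// Walk a JSON object and derive the URL each nested node would get,
// in the style of Firebase's per-node URLs. The base URL is purely
// illustrative.
function childUrls(base, data) {
  const urls = [];
  function walk(prefix, value) {
    urls.push(prefix);
    if (value !== null && typeof value === 'object') {
      for (const key of Object.keys(value)) {
        walk(`${prefix}/${key}`, value[key]);
      }
    }
  }
  walk(base, data);
  return urls;
}

const urls = childUrls('http://demo.firebase.com/example', {
  authors: { a1: { name: 'DJ' } }
});
console.log(urls);
```

Each object, element and attribute gets its own address just by virtue of its position in the structure.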
Event system
Which brings me on to the other part of Firebase that's important - the event system. Reading data from Firebase in your Javascript application is done by attaching asynchronous callbacks to a data location. These callbacks are triggered on data events like 'value', 'child_added', 'child_changed' and so on. So a very simple setup to show when a new record is added to a dataset would look like this:
var dataRef = new Firebase('http://demo.firebase.com/[...]299148/[...]QULZ4snBB/');
dataRef.on('child_added', function(snapshot) {
  var data = snapshot.val();
  // ...
});
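If this event style is new to you, here's a tiny stand-in (plain JavaScript, not the Firebase SDK) that mimics the behaviour: a 'child_added' listener fires once for each child already present at the location, and again whenever a new child arrives:

```javascript
// A miniature stand-in for a Firebase-style data location:
// 'child_added' listeners fire once per existing child, then once
// for each new child that is pushed afterwards.
function makeRef(initial = {}) {
  const children = { ...initial };
  const listeners = [];
  return {
    on(event, cb) {
      if (event !== 'child_added') return;
      listeners.push(cb);
      // Replay existing children to the new listener.
      for (const key of Object.keys(children)) {
        cb({ name: () => key, val: () => children[key] });
      }
    },
    push(key, value) {
      children[key] = value;
      for (const cb of listeners) {
        cb({ name: () => key, val: () => value });
      }
    }
  };
}

const seen = [];
const ref = makeRef({ first: 1 });
ref.on('child_added', snapshot => seen.push(snapshot.val()));
ref.push('second', 2);
console.log(seen); // [1, 2]
```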
Screencast: Stupid Firebase and SAPUI5 Tricks
On Saturday evening I had a little hack around, and found developing with Firebase fun as well as interesting. I put together a little screencast 'Stupid Firebase and SAPUI5 tricks'. I have been investigating the SAP UI Development Toolkit for HTML5 (aka SAPUI5) for a short while now, and thought it would be an interesting exercise to hook up some data events powered by Firebase with an SAPUI5 DataTable. And throw my favourite environment - the Unix command line - into the mix too.
As I didn't speak over the screencast, I thought I'd provide an annotation here. There really is little merit in this experiment; what was important for me was to see Firebase in action, and to learn something about the philosophy of the framework. I really like what I've seen so far.
As I mentioned at the start, there are some features still missing from Firebase - most notably security. So you're completely at liberty right now to read those URLs from the screencast and start hacking with my demo data. But why do that? Better to get yourself down to the Firebase tutorial pages and build some samples for yourself.
Share and enjoy!
]]>This week I had the great honour of being invited to, and speaking at, SAP DKOM (Development Kick-Off Meeting) in Karlsruhe. It was a truly great event - thousands of SAP developers attending many tracks and sessions on everything from Analytics, through Database & Technology, to Cloud, and more besides. As I sit here in Frankfurt airport on my way home, I've been reflecting on perhaps the best single takeaway from this event. Yes, the content of the talks was great (and I enjoyed giving my session on SAP NetWeaver Gateway too). Yes, the venue and organisation were second to none. Yes, it was great to see the SAP Mentor wolfpack and our illustrious leader Mark Finnern.
But most of all, I saw, felt, and experienced something that I last remember from over 20 years ago in my SAP career: The Developer Connection.
Back in the day, when I was (more) innocent, certainly a lot younger, and waist-deep in IBM mainframe tech, I moved around implementing and supporting R/2 installations in the UK and Europe. Esso Petroleum in London, Deutsche Telekom in Euskirchen, and so on. In those days you could catch up with all the OSS notes on your favourite topics over a couple of coffees. Most importantly however, you had connections to the developers at SAP who were building and shipping the code that you were implementing. We knew each other's names, and in many cases, shared phone numbers or email addresses too. There was a strong bond between customers and developers - and we worked together to make the software better.
That connection lost its way over the next few years, when SAP (consciously or unconsciously) built barriers between us. It became almost impossible in some cases to even find out the name of the developer or team responsible, let alone contact them directly.
Well - that connection is back. And better than ever before. Both at SAP TechEd Madrid, and this week at DKOM, developers were coming and saying hello. Developers who are building the great stuff we're exploring and using, like SAPUI5 and NetWeaver Gateway. People like you and me. We are connecting again. I think there are a number of reasons for this.
First, there's the amazing community called the SAP Community Network (SCN - although for me it will always be the SAP Developer Network, SDN) that brings together developers from all sources. Then there's SAP's re-focus on developers, and the corresponding coupling of empowerment and responsibility that SAP is giving directly to those developers. Further, there's the inexorable turning-inside-out manoeuvre that SAP began a few years ago now, moving cautiously at first but now gathering pace as more and more of the technology directions that SAP is following come from outside the SAP universe, not inside. SAP developers are naturally connecting with the wider development community in general.
Whatever the reason, it's a great sign that the future looks exciting for SAP development as a whole. Connections, collaboration and cooperation are returning. The Developer Connection is here again.
]]>I thought it would be a nice exercise to take one of the SAPUI5 controls for a spin, namely the SearchField. It has a great many options, and wraps some jQuery functions to provide a comfortable way to expose 'intellisense' style results as you type. It's over there on the right, in the sidebar.
From the Javascript, here's the instantiation:

var oSdnSearch = new sap.ui.commons.SearchField("sdnSearch", {
  startSuggestion: 2,
  search: function (oEvent) {
    var topic = oEvent.getParameter("query");
    window.open(oSdnAreaMap[topic], '_blank');
  },
  suggest: doSuggest
});

Simple as that. I've pulled the SDN Forum names and URLs into an object oSdnAreaMap, and have a doSuggest() function that handles the suggest event by deriving matches and filling the search results.
This was a short hack started on the hotel room balcony and finished off in the airport. One thing I haven't got to the bottom of yet is controlling the number of displayed matches. Hope to get that nailed down soon.
Update 30 Mar 2012
After some collaboration with Ethan Jewett I've put the code on github, and it now also matches anywhere in the string, rather than the match being anchored at the start. Share and enjoy!
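The matching at the heart of a doSuggest handler can be sketched in a few lines of plain JavaScript. This is illustrative only - the map below is a made-up stand-in for oSdnAreaMap, and the real handler also has to feed the results back to the control:

```javascript
// Case-insensitive substring match over the forum names - anchored
// anywhere in the string, not just at the start. The map is a made-up
// stand-in for oSdnAreaMap.
const oSdnAreaMap = {
  'ABAP Development': 'http://example.com/abap',
  'SAP NetWeaver Gateway': 'http://example.com/gateway',
  'UI Development Toolkit for HTML5': 'http://example.com/sapui5'
};

function matchTopics(query, map) {
  const q = query.toLowerCase();
  return Object.keys(map).filter(function (name) {
    return name.toLowerCase().indexOf(q) !== -1;
  });
}

console.log(matchTopics('gateway', oSdnAreaMap));
```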
Web Programming with SAP's Internet Communication Framework with DJ Adams from Prohyena on Vimeo.
If you're interested in attending the next instance of the course, which is in May this year, please sign up!
]]>And now for something completely different. Last week Port Street Beer House took delivery of a small number of cases of beer from the Kiuchi brewery based in Ibaraki-Ken, Japan. Craft beer from the USA? Check. Classic beers from Belgium and elsewhere in mainland Europe? Check. Amazing small-brewery beers from the UK? Double-check. But craft beer from Japan?
Beers from Japan are making an inroad into the UK via importers in Europe, Italy in particular. Port Street Beer House has heralded Hitachino Nest's arrival in Manchester by being the first establishment to stock it, in particular the Weizen, Espresso Stout, Sweet Stout, Amber Ale and the Red Rice Ale.
If the first word that comes to mind is 'sake' when thinking of Japanese breweries, you're on the right track. Hitachino Nest is the main beer brand from the Kiuchi brewery, but they only started brewing beer in 1996. Over 150 years prior to that, the brewery was established by Kiuchi Gihei to brew sake from the warehouse stocks of rice collected from farmers as land tax on behalf of the dominant Mito Tokugawa family in that region. After the end of the Second World War, when demand for sake increased, the Kiuchi brewery, by then under the leadership of Mikio Kiuchi, bucked the trend and remained true to quality and craftsmanship, resisting the temptation to mass-produce.
So, Red Rice Ale. Not as unusual as it sounds: rice is a common starch adjunct used in brewing beer, most famously (infamously?) used in Anheuser Busch's Budweiser pale lager. Adjuncts are used for a number of reasons, from cost saving measures (rice is cheaper than barley) to introducing taste, body and mouthfeel features. The addition of red rice is additionally interesting as traditionally it is regarded as 'weedy', in other words a variety that produces fewer grains per plant than cultivated rice, and is considered a weed or a pest that grows despite, rather than because of, cultivation.
That the red rice starch adjunct is considered a weed becomes completely irrelevant when you consider the immensely positive impact of its addition to the brew of this amber ale. With a pinkish pale colour and impressive soapy-white head, a light sweetness is at the heart of Red Rice Ale, with a fruity rice aroma on the nose reminiscent of rose water, and a subtle strawberry-laced experience throughout. I never thought I'd say this as something positive, but a waxy mouthfeel lends a distinctively pleasant note to the drinking experience. None of the 7.0% ABV strength is evident (except when I walk from the bar to a nearby table to write this review), and the beer is a very easy drinking experience.
Hitachino Nest has been established in the USA for a decade or so now, and rightly so. With its distinctive Owl logo, quality top-fermented beers and innovative techniques, it's only a matter of time until they're established over here too. Until then, get yourself down to Port Street, and see for yourself. You won't be disappointed.
Wow, those folks have certainly put together some nice documentation already! Try it for yourself - once downloaded, open the demokit directory and you should be presented with a nice (SAPUI5-powered) overview, developer guide, controls and API reference:
The framework is based upon jQuery / jQuery UI and contains a huge number of controls. It also supports data bindings, and one thing that had intrigued me from the podcast was that data bindings were possible to arbitrary JSON and XML ... and OData resources.
"Gosh", I hear you say, "that reminds me of something!" Of course, SAP NetWeaver Gateway's REST-informed data-centric consumption model is based upon OData. So of course I immediately was curious to learn about SAPUI5 with an OData flavour. How could I try out one of the controls to surface information in OData resources exposed by NetWeaver Gateway?
What I ended up with is SAPUI5's DataTable control filled with travel agency information from my copy of the trial NetWeaver Gateway system, via an OData service all ready to use. You can see what I mean in this short screencast.
Here's what I did to get the pieces together. I'm assuming you've got the trial Gateway system installed and set up (you know, fully qualified hostname, ICM configured nicely, and so on), and that you're semi-familiar with the SFLIGHT dataset.
Check with transaction /iwfnd/reg_service, for the LOCAL system alias, that the service RMTSAMPLEFLIGHT is available, as shown here.
Check you can see the service document by clicking the Call Browser button (you may need to provide a user and password for HTTP basic authentication). You can also check the data by manually navigating to the TravelagencyCollection by following the relative href attribute of the app:collection element as shown here:
In other words, navigate from something like this:
http://&lt;host&gt;:&lt;port&gt;/sap/opu/sdata/IWFND/RMTSAMPLEFLIGHT/?$format=xml
to this:
http://&lt;host&gt;:&lt;port&gt;/sap/opu/sdata/IWFND/RMTSAMPLEFLIGHT/TravelagencyCollection?$format=xml
(The $format=xml is to force the service to return a less exotic Content-Type of application/xml rather than an Atom-flavoured one, so that the browser is more likely to render the data in human-readable form.)
Following this href should show you some actual travel agency data in the form of entries in an Atom feed (yes, "everything is a collection/feed!"):
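The 'follow the href' step can be done programmatically too. Here's a rough sketch (plain JavaScript with a cut-down inline service document and a hypothetical host and port; a real client would use a proper XML parser) that pulls the relative href out of the app:collection element and resolves it against the service root:

```javascript
// A cut-down Atom service document, of the shape served at the
// service root.
const serviceDoc = `
  <app:service xmlns:app="http://www.w3.org/2007/app">
    <app:workspace>
      <app:collection href="TravelagencyCollection"/>
    </app:workspace>
  </app:service>`;

// Extract the relative href with a regex (fine for a sketch, not for
// production) and resolve it against the service root URL.
function collectionUrl(root, doc) {
  const match = doc.match(/<app:collection[^>]*href="([^"]+)"/);
  return match ? root + match[1] + '?$format=xml' : null;
}

const root = 'http://gateway.example.com:8000/sap/opu/sdata/IWFND/RMTSAMPLEFLIGHT/'; // hypothetical host:port
console.log(collectionUrl(root, serviceDoc));
```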
Make your SAPUI5 framework accessible. To avoid Same Origin Policy based issues in your browser, get your Gateway's ICM to serve the files for you. Create a 'sapui5' directory in your Gateway's filesystem:
/usr/sap/NPL/DVEBMGS42/sapui5/
unpack the SAPUI5 framework into here, and add an instance profile configuration parameter to tell the ICM to serve files from this location:
icm/HTTP/file_access_5 = PREFIX=/sapui5/, DOCROOT=$(DIR_INSTANCE)/sapui5/, BROWSEDIR=2
(here I already have 5 previous file_access_xx parameters, hence the '5' suffix in this example)
and when you restart the ICM it should start serving the framework to you:
Actually, calling it an application is far too grand. But you know what I mean. Now we have the SAPUI5 framework being served, and the OData service available, it's time to put the two together.
Here's the general skeleton of the application - we pull in SAPUI5, and have an element in the body where the control will be placed:
<html>
<head>
<meta http-equiv="X-UA-Compatible" content="IE=edge" />
<title>SAP OData in SAPUI5 Data Table Control</title>
<!-- Load SAPUI5, select theme and control libraries -->
<script id="sap-ui-bootstrap"
type="text/javascript"
src="http://gateway.server:port/sapui5/sapui5-static/resources/sap-ui-core.js"
data-sap-ui-theme="sap_platinum"
data-sap-ui-libs="sap.ui.commons,sap.ui.table">
</script>
<script>
...
</script>
</head>
<body>
<h1>SAP OData in SAPUI5 Data Table Control</h1>
<div id="dataTable"></div>
</body>
</html>
In the final step we'll have a look at what goes in the "..." bit.
So now it's time to flex our Javascript fingers, stand on the shoulders of giants, and write a few lines of code to invoke the SAPUI5 power and glory.
What we need to do is:
- create a DataTable control
- define the columns, each with a label and a cell template bound to a property of the data
- create an OData model pointing at the service document, and set it on the control
- bind the control's rows to the TravelagencyCollection resource
- place the control on the page
Simples!
Creating the DataTable control goes like this (but you must remember to add the control library to the data-sap-ui-libs
attribute when loading SAPUI5 - see Step 3):
var oTable = new sap.ui.table.DataTable();
Each column is added and described like this:
oTable.addColumn(new sap.ui.table.Column({
label: new sap.ui.commons.Label({text: "Agency Name"}),
template: new sap.ui.commons.TextView().bindProperty("text", "NAME"),
sortProperty: "NAME"
}));
There are three different templates in use, for different fields - the basic TextView, the TextField and the Link.
The OData data model is created like this, where the URL parameter points to the service document:
var oModel = new sap.ui.model.odata.ODataModel("http://gateway.server:port/sap/opu/sdata/iwfnd/RMTSAMPLEFLIGHT");
It's then linked to the control like this:
oTable.setModel(oModel);
The specific OData resource TravelagencyCollection is bound to the control's rows like this:
oTable.bindRows("TravelagencyCollection");
And then the control is placed on the page like this:
oTable.placeAt("dataTable");
I've put the complete code in a Github Gist for you to have a look at.
What you end up with is live data from your SAP Gateway system that is presented to you like this:
Share and enjoy!
]]>One of the challenges in getting the Data Browser to consume OData resources exposed by NetWeaver Gateway (get a trial version, available from the Gateway home page on SDN) was serving a couple of XML-based domain access directive files as described in "Making a Service Available Across Domain Boundaries" - namely clientaccesspolicy.xml
and crossdomain.xml
, both needing to be served from the root of the domain/port based URL of the Gateway system. In other words, the NetWeaver stack needed to serve requests for these two resources:
http://&lt;host&gt;:&lt;port&gt;/clientaccesspolicy.xml
and
http://&lt;host&gt;:&lt;port&gt;/crossdomain.xml
Without these files, the Data Browser will show you this sort of error:
A SecurityException has been encountered while opening the connection.
Please try to open the connection with Sesame installed on the desktop.
If you are the owner of the OData feed, try to add a clientaccesspolicy.xml
file on the server.
So, how to make these two cross domain access files available, and specifically from the root? There have been some thoughts on this already, using a default service on the ICF's default host definition, or even dynamically loading the XML as a file into the server cache (see the ABAP program in this thread in the RIA dev forum).
But a conversation on Twitter last night about BSPs, raw ICF and even the ICM reminded me that the ICM is a powerful engine that is often overlooked and underloved. The ICM - Internet Communication Manager - is the collection of core HTTP/SMTP/plugin services that sits underneath the ICF, and handle the actual HTTP traffic below the level of the ICF's ABAP layer. In the style of Apache handlers, there are a series of handlers that the ICM has to deal with plenty of HTTP serving situations - Logging, Authentication, Server Cache, Administration, Modification, File Access, Redirect, as well as the "ABAP" handler we know as the ICF layer.
Could the humble ICM help with serving these two XML resources? Of course it could!
The File Access handler is what we recognise from the level 2 trace info in the dev_icm tracefile as HttpFileAccessHandler. You all read the verbose traces from the ICM with your morning coffee, right? Just kidding. Anyway, the File Access handler makes its features available to us in the form of the icm/HTTP/file_access_&lt;xx&gt; profile parameters.
With a couple of these file_access parameters, we can serve static clientaccesspolicy.xml
and crossdomain.xml
files straight from the filesystem, matched at root. Here's what I have in my /usr/sap/NPL/SYS/profile/NPL_DVEBMGS42_nplhost
parameter file:
icm/HTTP/file_access_1 = PREFIX=/clientaccesspolicy.xml,
DOCROOT=$(DIR_INSTANCE)/qmacro, DIRINDEX=clientaccesspolicy.xml
icm/HTTP/file_access_2 = PREFIX=/crossdomain.xml,
DOCROOT=$(DIR_INSTANCE)/qmacro, DIRINDEX=crossdomain.xml
(I already have file_access_0
specifying something else not relevant here).
What are these parameters saying? Well the PREFIX specifies the relative URL to match, the DOCROOT specifies the directory that the ICM is to serve files from in response to requests matching the PREFIX, and DIRINDEX is a file to serve when the 'index' is requested. Usually the PREFIX is used to specify a directory, or a relative URL representing a 'container', so the DIRINDEX value is what's served when there's a request for exactly that container. The upshot is that the relevant file is served for the right relative resource. The files are in directory /usr/sap/NPL/DVEBMGS42/qmacro/.
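To picture what the ICM does with these parameters, here's a little JavaScript model of the matching logic - my own illustration of the behaviour described above, not actual ICM code: match the request path against each PREFIX, and on an exact match serve the DIRINDEX file from DOCROOT.

```javascript
// Illustrative model of the file_access handler's PREFIX/DOCROOT/DIRINDEX
// resolution, based on the behaviour described above (not ICM code).
const fileAccess = [
  { prefix: '/clientaccesspolicy.xml', docroot: '/usr/sap/NPL/DVEBMGS42/qmacro', dirindex: 'clientaccesspolicy.xml' },
  { prefix: '/crossdomain.xml',        docroot: '/usr/sap/NPL/DVEBMGS42/qmacro', dirindex: 'crossdomain.xml' }
];

function resolve(path) {
  for (const h of fileAccess) {
    if (path === h.prefix) {
      // Exact match on the prefix: serve the DIRINDEX file from DOCROOT.
      return `${h.docroot}/${h.dirindex}`;
    }
  }
  return null; // fall through to the next handler (ultimately the ICF/ABAP layer)
}

console.log(resolve('/clientaccesspolicy.xml'));
// /usr/sap/NPL/DVEBMGS42/qmacro/clientaccesspolicy.xml
```

The request never reaches the ABAP stack at all - exactly what the trace lines below show.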
While we're at it, we might as well specify a similar File Access handler parameter to serve the favicon, not least because that will prevent those pesky warnings about not being able to serve requests for that resource, if you don't have one already:
icm/HTTP/file_access_3 = PREFIX=/favicon.ico,
DOCROOT=$(DIR_INSTANCE)/qmacro, DIRINDEX=favicon.ico
The upshot of all this is that the static XML resources are served directly by the ICM, without the request even having to permeate up as far as the ABAP stack:
Handler 5: HttpFileAccessHandler matches url: /clientaccesspolicy.xml
HttpSubHandlerCall: Call Handler: HttpFileAccessHandler (1089830/1088cf0), task=TASK_REQUEST(1), header_len=407
HttpFileAccessHandler: access file/dir: /usr/sap/NPL/DVEBMGS42/qmacro
HttpFileAccessHandler: file /usr/sap/NPL/DVEBMGS42/qmacro/clientaccesspolicy.xml modified: -1/1326386676
HttpSubHandlerItDeactivate: handler 4: HttpFileAccessHandler
HttpSubHandlerClose: Call Handler: HttpFileAccessHandler (1089830/1088cf0), task=TASK_CLOSE(3)
and also that the browser-based Sesame Data Browser can access your Gateway OData resources successfully:
(Sesame Data Browser Screenshot lost in SAP community platform migration)
Success!
If you're interested in learning more about the Internet Communication Manager (ICM) and the Internet Communication Framework (ICF), you might be interested in my Omniversity of Manchester course:
Web Programming with SAP's Internet Communication Framework
Which is currently running in March (3rd and 4th) and May (9th and 10th) in Manchester.
]]>In preparation for the previous instance of the course last year, we shot a video with yours truly explaining what the course was about and why you should attend.
Omniversity: Web Programming with SAP's Internet Communication Framework from Madlab on Vimeo.
Madlab have their own semi-resident video expert, and in the run-up to the next course we're going to shoot a new video with lots of exciting content! Well, I guess you might call it exciting if you are into SAP tech and seeing debugging activity in slow motion.
Anyway, watch this space - next week I'm over at Madlab again for the shoot. Perhaps I should get a haircut. Or a wig.
]]>But there's something about the general term 'Information Diet' that has me concerned, and has caused me to write this post (and therefore produce - win!). Yes, reduce your TV viewing (I don't watch much anyway, and we don't have satellite or cable). Yes, reduce your general browsing, and certainly try to move away from 'continuous partial attention' towards 'managed full attention' (perhaps using Pomodoro or similar techniques). But don't treat this like a typical diet. Just like your body, your mind needs energy, and what's more, it needs feeding. With the right sources. Don't think you have to reduce your information intake. Rather, make sure that the information you consume is protein, good carbs, fibre and the like. Last year I started to exercise in earnest again, and am consuming more than before. But I'm consuming the right foods - oily fish, fruit, veg, nuts, and so on. And I'm feeling pretty healthy on it.
Don't worry about consuming less. Don't worry about dieting. Concern yourself with the quality of what you consume. I have a Kindle, and combined with Instapaper, consume more excellent, stimulating, educational and thought-provoking articles than ever (here's some background that goes some way to explaining my reading appetite). And just as my consumption of the right foodstuffs (with exercise) has increased my health and wellbeing, so my consumption of the right infostuff has increased my knowledge, and exercised my brain. Yes, certainly aim to produce more, but look to what you consume, rather than how much.
His team has an enormous scope, covering Mobile, In-Memory, On-Demand, HANA and more. While the word 'Marketing' might be auto-filtered by a techie's radar-filter, what became clear very quickly is that this group is totally developer focused. His group is already building a brand-new Developer Center (I wrote about this earlier this week) and is focused on helping the developer help themselves. What's more, the group is staffed with developers. I've not managed to find anyone in TIP yet that doesn't have a developer background.
Hasso is reported to have said 'developers are the key to success', and of course, we all know that Developers Are The New Kingmakers. What becomes clear in this interview is that there's a re-focus on the developer in the space that spans the distance between mobile and enterprise. This re-focus is long overdue in our industry, so I applaud SAP for having the courage to lead on this. Yes, SAP will benefit because one of the keys to a successful mobile platform is a host of developers in the non-traditional SAP space. But if the message and focus builds, the developer at large will benefit even more.
Perhaps this is a milestone along the way to the upcoming Developer Renaissance?
The interview is here: http://www.sapvirtualevents.com/teched/sessiondetails.aspx?sId=841
HTML5 is one of those technologies. While not so much a surprise, what's more revealing, and encouraging, is that it's being given decent coverage at SAP TechEd this year. The adoption of HTML5 as the core of a new UI library (originally codenamed "Phoenix") for app front-ends is something that has a voice here. Look at the TechEd sessions available:
That's not to say that this is breaking news: Thomas Jung (an SAP Mentor from SAP Labs) made reference to Phoenix in an interview with Jon Reed a few months ago. Furthermore, in a very useful chat with SAP's Chris Whealy on Monday after InnoJam, I got to understand more about the philosophy and approach of SAP NetWeaver Gateway's exposure of data objects and their relationships in a way that would make HATEOAS pay attention. And Chris used an early version of the UI library to present the exposed data. This seems to be a common theme internally at SAP, at least.
So what's the deal? In EXP443 I learned that the library is built upon jQuery. So SAP are avoiding NIH syndrome - that's good. But there were other attendees questioning SAP's decision to build Yet Another JavaScript UI Library. At the very least, the model implementation of the library's MVC framework gives the wily JavaScript hacker a head start on using and consuming Gateway services. And in my opinion that's the deal. Yes, we have a very nice UI library (and no, it's not available until 2Q12, before you ask!) but we also have code that speaks the language of thousands of front-end developers on the one hand, and eases the connection to the proprietary back-end on the other.
SAP's future lies with developers, and they're embracing those developers in many different ways (the Technology & Innovation Platform team is one group making seriously good moves in this direction, but that's a story for another time). HTML5 adoption by SAP was most likely part of scratching an internal itch, but it implicitly embraces non-SAP developers in potentially far-reaching ways. Great stuff.
Enter the SAP Developer Center (I'll keep to the US spelling of this for consistency!), which is pretty much exactly what I'm looking for. It's not live yet, but when it is (we're talking 1Q12), it looks to be a killer resource centre. Right now it's in beta testing, specifically with HANA. Think code.google.com / developer.google.com with a cloud-based offering of trial instances on demand. A go-to resource centre for building your skills in the new era of SAP's technology platform.
Sounds good? I think it sounds great! Look out for it appearing as part of the SAP Community Network soon.
Update 06 Feb 2012: See the article "What Would You Like To Develop Today?" in the Jan 2012 edition of SAP Insider for more on the Developer Center.
I'm taking part in Movember, a fun and serious movement where moustaches are grown, ridicule is thrown, and hopefully people become more aware of men's health in general, and prostate and testicular cancer in particular.
The idea is that you start clean-shaven on 1st Nov and grow your moustache through the month, raising money along the way. I have a Movember page here:
Please visit and donate what you can. I'll be very grateful, thank you!
SAP's Internet Communication Framework (ICF) is the platform that underpins the majority of SAP's offerings in this space, even SAP NetWeaver Gateway. This 2-day course will help you gain a detailed understanding of the framework, harness its power, and unleash your own resource-orientated web service masterpieces!
Dates in March and May are available; follow the links to find out more and to book a place:
Alternative Dispatcher Layer
One of the topics covered in Day 2 of this course is the Alternative Dispatcher Layer (ADL), a lightweight alternative approach to building web applications, informed and influenced by other libraries and frameworks such as the Python webapp framework in Google App Engine. Read more about the origins of ADL in this SAP Developer Network post: A new REST handler / dispatcher for the ICF.
If you're after more background, see this post from earlier this year: Stand Steady on the Shoulders of Giants
Looking forward to seeing you on the course!
Life at AstraZeneca has been great; it's one of the friendliest places to work, the people are great, and the location and facilities are second to none. I also met my wife and theoretical childhood sweetheart Michelle here. It's not all been a bed of roses, of course (nowhere is!): at times the work has been frustrating, and increasingly there are too many layers between me and the code surface.
At heart I'm a coder and builder, driven by curiosity and the desire to learn, teach and implement.
So it's with great excitement that, in January 2012, I'm joining Bluefin Solutions as a permanent member of the team. I've known many of the gang at Bluefin for a while, and feel as though I already have a lot in common with them. I've spoken at the Northern IT Directors' Round Table for them, was their first guest blogger, and have bumped into many of them at SAP-orientated events, from SAP Evenings to SAP TechEds and beyond.
My official title will be "Senior SAP Development Architect", but there's also an "Evangelist" flavour to my role, which I'll be embracing and making my own. What attracts me to Bluefin, in addition to the quality of their people, is their drive, their leadership and their embrace of technology; with that in mind, I'm really looking forward to helping research, steer and shape innovation in the Enterprise in the near future.
Hooray!
Sometimes you're not in the mood for what everyone else is having. That's the tagline of Longmont, Colorado brewer Left Hand Brewing Co's Twitter presence. As I approach the bar at Port Street Beer House and observe the orders for a seemingly endless collection of beers, one bottle calls out to me from the fridge. Milk Stout. Exactly what I'm looking for. This beer's reputation precedes it: awards galore already won, most recently Gold in the European Beer Star Competition.
Sunlight streams through the windows on this cold, crisp autumn day as I reverently carry the bottle and a stemmed glass to the table. This is not your father's stout. No sense of vast volumes of heavy blackness tinged with bitterness here, thank you very much. This is a full-bodied sweet stout, an English style of beer from the late 19th century. Espresso coloured, with coffee traces and slight vanilla notes, this is an incredibly velvety smooth experience from start to finish. Any hints of bitterness are more than balanced by the inclusion of milk sugar, which is defined as "a sugar comprising one glucose molecule linked to a galactose molecule". Galactose? Space milk? All I know is that the inclusion of milk sugar in the brew has had a fabulous effect. Sweetness and chocolate overtones make this a very enjoyable experience. Normally at this stage in the review I have some beer left in the glass, but the glass and bottle are both empty already.
Left Hand Brewing Co's philosophy is about balance. It's fair to say that they've achieved a great balance between traditional style and modern interpretation, between the Magnum and US Golding Hops, the myriad malts (from Crystal to Flaked Barley and Chocolate) and the milk sugar sweetness, and between the relatively high ABV content and inherent drinkability. Next time you're stuck or spoiled for choice, go for something different. Take a chance on this Milk Stout, and you'll be far from disappointed.
Parts Overview
Putting it all together
So at this stage we've done pretty much everything required for this example app. The final task is to extend the standard Spreadsheet menu to give the user access to the custom features: selecting a tasklist, and kicking off an update (URL pull and synchronisation). It's very easy to extend the menu; in a few lines of code we're going to end up with something like this:
It's as simple as this:
function onOpen() {
var ss = SpreadsheetApp.getActiveSpreadsheet();
var menuEntries = [
{name: "Update", functionName: "update"},
{name: "Select Task List", functionName: "taskListUi"}
];
ss.addMenu("Articles", menuEntries);
}
We use the addMenu() method of the Spreadsheet class to create a new menu entry, passing an array of objects representing menu items. And the function name? onOpen() is one of a number of built-in simple event handler functions; this one runs automatically when a spreadsheet is opened, which is an ideal time to extend the menu.
The complete script
So we're done with the final part! Let's celebrate with the script in its entirety. And a beer. Cheers!
// -------------------------------------------------------------------------
// Constants
// -------------------------------------------------------------------------
APIKEY = 'AIzaSyANY6ebMr2bi1Fzn-53kysp0y4LsbZA488';
ACTIVITYLISTURL = 'https://www.googleapis.com/plus/v1/people/{userId}/activities/{collection}';
READINGLISTCELL = 'C1';
USERIDCELL = 'D1';
USERID = '106413090159067280619'; // Mahemoff
// -------------------------------------------------------------------------
// update()
// Pulls in article links into sheet and synchronises with task list
// -------------------------------------------------------------------------
function update() {
// First, check that we have a tasklist id already; it's stored in
// the comment section of the 'readinglistcell'
var sh = SpreadsheetApp.getActiveSpreadsheet().getActiveSheet();
var taskListId = sh.getRange(READINGLISTCELL).getComment();
// If we don't have an id, tell the user to choose a tasklist
if(taskListId === '') {
SpreadsheetApp.getActiveSpreadsheet().toast(
"Use Articles -> Select Task List to choose a task list",
"No Task List",
5
);
// Otherwise, we know which task list to synchronise with, so
// go and update the reading list with URLs from the Google+ activity
// list, and then sync that with the task list items
} else {
retrieveActivityUrls_();
synchronise_(taskListId);
}
}
// -------------------------------------------------------------------------
// taskListUi()
// Displays a Ui to allow the user to select a tasklist to manage
// the reading tasks. Can select an existing task list or create a new one
// -------------------------------------------------------------------------
function taskListUi() {
var doc = SpreadsheetApp.getActiveSpreadsheet();
var app = UiApp.createApplication();
app.setTitle('Task Lists');
// We'll have a grid and a button in this
// vertical panel
var panel = app.createVerticalPanel();
// Use a listbox to display a choice of existing tasklists
var lb = app.createListBox(false);
lb.setName('existingList');
var tasklists = getTasklists_();
for (var tl in tasklists) {
lb.addItem(tasklists[tl].getTitle());
}
// Use the grid to layout the listbox, a textbox for a new list,
// and some corresponding labels
var grid = app.createGrid(2, 2);
grid.setWidget(0,0, app.createLabel("Existing:"));
grid.setWidget(0,1, lb);
grid.setWidget(1,0, app.createLabel("Or new:"));
grid.setWidget(1,1, app.createTextBox().setName('newList'));
// The only button; handler will be linked to this button click event
// Remember to add the grid contents to the callback context
var button = app.createButton("Choose");
var chooseHandler = app.createServerClickHandler('handleChooseButton_');
chooseHandler.addCallbackElement(grid);
button.addClickHandler(chooseHandler);
// Put it all together and show it
panel.add(app.createLabel("Select existing or create new list"));
panel.add(grid);
panel.add(button);
app.add(panel);
doc.show(app);
}
// -------------------------------------------------------------------------
// handleChooseButton_(e)
// Handler for 'Choose' button on taskListUi Ui; creates a new task list
// if a new one has been specified; grabs the ID of the chosen task list
// and stores the task list name and id in the TASKLISTCELL
// -------------------------------------------------------------------------
function handleChooseButton_(e) {
// Assume an existing list was chosen
var selectedList = e.parameter.existingList;
// But check for a new list being specified; if it has been, create
// a new task list
if(e.parameter.newList != '') {
selectedList = e.parameter.newList;
var newTaskList = Tasks.newTaskList().setTitle(selectedList);
Tasks.Tasklists.insert(newTaskList);
}
// Grab the list of tasklists, because we'll need the id
var taskLists = getTasklists_();
var taskListId = -1;
for(tl in taskLists){
if(taskLists[tl].getTitle() === selectedList) {
taskListId = taskLists[tl].getId();
break;
}
}
// Record the list name and id
var sh = SpreadsheetApp.getActiveSheet();
var cell = sh.getRange(READINGLISTCELL);
cell.setValue(selectedList);
cell.setComment(taskListId);
// Close the Ui popup and display the name of the chosen list
var app = UiApp.getActiveApplication();
app.close();
SpreadsheetApp.getActiveSpreadsheet().toast(selectedList, "Selected List", 3);
return app;
}
// -------------------------------------------------------------------------
// onOpen()
// Event-based function called when the spreadsheet is opened; adds items
// to the menu
// -------------------------------------------------------------------------
function onOpen() {
var ss = SpreadsheetApp.getActiveSpreadsheet();
var menuEntries = [ {name: "Select Task List", functionName: "taskListUi"},
{name: "Update", functionName: "update"} ];
ss.addMenu("Articles", menuEntries);
}
// -------------------------------------------------------------------------
// getTasklists_()
// Retrieve a list of the user's tasklists (uses the APIs Services)
// Note that the Tasks Services documentation is not accurate here; we
// would expect to be able to use the TasklistsCollection class.
// -------------------------------------------------------------------------
function getTasklists_() {
var tasklistsList = Tasks.Tasklists.list();
return tasklistsList.getItems();
}
// -------------------------------------------------------------------------
// retrieveActivityUrls_()
// Use UrlFetch to retrieve a Google+ API resource: activities for a person
// Use Javascript data structures; restrict the number of API calls
// -------------------------------------------------------------------------
function retrieveActivityUrls_() {
// Grab existing list of URLs
var sh = SpreadsheetApp.getActiveSheet();
var lastRow = sh.getLastRow();
var urlList = sh.getRange(2, 1, lastRow - 1).getValues();
var list = {'old': {}, 'new': []};
for (var i in urlList){
list['old'][urlList[i]] = 1;
}
// Use the userid in the sheet, fallback to a favourite :)
var userid = sh.getRange(USERIDCELL).getValue() || USERID;
// Build Google+ API resource and retrieve it; parse JSON content
var actListUrl = buildActivityListUrl_(userid, 'public', APIKEY);
var jsonString = UrlFetchApp.fetch(actListUrl).getContentText();
var activities = Utilities.jsonParse(jsonString);
// We're looking for the item object attachments, where the
// attachment's objectType is 'article'. We want the url and displayName
for (var i in activities.items) {
var attachments = activities.items[i].object.attachments;
for (var a in attachments) {
var attachment = attachments[a];
// We've got a URL and title; store it as new if it doesn't
// already exist. Store it as list of lists, ready for
// a setValues([][]) insert
if (attachment.objectType == 'article') {
if (! (attachment.url in list['old'])) {
list['new'].push([attachment.url, attachment.displayName]);
}
}
}
}
// Blammo!
if (list['new'].length) {
sh.getRange(lastRow + 1, 1, list['new'].length, 2).setValues(list['new']);
}
}
// -------------------------------------------------------------------------
// synchronise_(taskListId)
// Synchronise the URLs in the spreadsheet with items in the chosen tasklist
// The task list item id for a URL is stored in the comment for that URL cell
// -------------------------------------------------------------------------
function synchronise_(taskListId) {
// Grab list of all URLs, and associated comments
var sh = SpreadsheetApp.getActiveSheet();
var urlRange = sh.getRange(2, 1, sh.getLastRow() - 1, 1);
var urls = urlRange.getValues();
var comments = urlRange.getComments();
// For each URL, check the status of the associated task.
// If there isn't an associated task, create one.
for (var i = 0, j = urls.length; i < j; i++) {
if (comments[i] == "") {
Logger.log("New task");
var task = Tasks.newTask();
task.setTitle(urls[i]);
var newTask = Tasks.Tasks.insert(task, taskListId);
sh.getRange(i + 2, 1).setComment(newTask.getId());
} else {
Logger.log("Existing task");
var existingTask = Tasks.Tasks.get(taskListId, comments[i][0]);
if (existingTask.getStatus() === "completed") {
sh.getRange(i + 2, 1, 1, 2).setFontLine('line-through');
}
}
}
}
// -------------------------------------------------------------------------
// buildActivityListUrl_(userId, collection, apiKey)
// Creates a specific resource address (URL) for the public activities
// for a given person in Google+
// See https://developers.google.com/+/api/latest/activities/list
// This will be obsolete when there are direct Google+ Services for
// Apps Script
// -------------------------------------------------------------------------
function buildActivityListUrl_(userId, collection, apiKey) {
var actListUrl = ACTIVITYLISTURL;
actListUrl = actListUrl.replace(/{userId}/, userId);
actListUrl = actListUrl.replace(/{collection}/, collection);
actListUrl = actListUrl + '?key=' + apiKey;
return actListUrl;
}
Parts Overview
Putting this into context: the Update request
We've covered a lot of ground in the previous three parts in this series. Now we're at the stage where we have the functions for selecting a tasklist and for retrieving the article URLs from a Google+ activity stream.
So the one main piece of work outstanding is synchronising the retrieved URLs as tasks on the chosen tasklist.
If you watch the screencast shown in Part 1 you'll see that the synchronisation is part of a more general "update" request, which includes fetching new URLs from Google+ and synchronising them with the tasklist. So let's have a look at the function that binds those two things together.
Here's the update() function, which we'll allow the user to call from a menu item (we'll cover this in the next instalment).
READINGLISTCELL = 'D1';

function update() {
  // First, check that we have a tasklist id already; it's stored in
  // the comment section of the 'readinglistcell'
  var sh = SpreadsheetApp.getActiveSpreadsheet().getActiveSheet();
  var taskListId = sh.getRange(READINGLISTCELL).getComment();
  // If we don't have an id, tell the user to choose a tasklist
  if (taskListId === '') {
    SpreadsheetApp.getActiveSpreadsheet().toast(
      "Use Articles -> Select Task List to choose a task list",
      "No Task List",
      5
    );
  // Otherwise, we know which task list to synchronise with, so
  // go and update the reading list with URLs from the Google+ activity
  // list, and then sync that with the task list items
  } else {
    retrieveActivityUrls_();
    synchronise_(taskListId);
  }
}
This function grabs a reference to the active sheet, and pulls the comment from the cell that we've designated as the place where the reading list tasklist info is stored: READINGLISTCELL. The name is stored in the cell, and the ID is stored in the cell's comment. If there isn't an ID, then we'll ask the user to choose a tasklist using the Ui we built in Part 2. The Browser class in Google Apps Script's Base Services gives us a nice dialog box that looks like this:
But there's also a nice visual message feature available in the [Spreadsheet Services](http://code.google.com/googleapps/appsscript/service_spreadsheet.html), specific to a spreadsheet: [toast()](http://code.google.com/googleapps/appsscript/class_spreadsheet.html#toast). Calling this causes a popup to appear in the lower right of the screen, which stays visible for a short while. Because the "toast" name is so evocative, we'll use it in our function to prompt the user to choose a tasklist. If there's already a tasklist chosen, then we go straight into retrieving the URLs (see Part 3) and then call the synchronise_() function, passing the ID of the tasklist.
Synchronising URLs and Tasks
OK, so what do we need to do to synchronise the URLs? It's similar to the technique described in the great article "Integrating with Google APIs - Creating a simple reading list". There are a couple of differences: I'm not going to use the UrlShortener Services, and I'm going to try to reduce the number of API calls by bulk-grabbing the cell data.
First, we get a range reference on the active sheet, which equates to the list of URLs already there. We get all of the URLs (urlRange.getValues()) and all of the corresponding comments (urlRange.getComments()).
function synchronise_(taskListId) {
  // Grab list of all URLs, and associated comments
  var sh = SpreadsheetApp.getActiveSheet();
  var urlRange = sh.getRange(2, 1, sh.getLastRow() - 1, 1);
  var urls = urlRange.getValues();
  var comments = urlRange.getComments();
We go through each of the URLs, and create a new task in the tasklist if there isn't already something in the comment for that URL.
Otherwise we've already created a task for the URL, so we grab the task to get its status, and if it's marked as completed, we format the URL and corresponding description (in the next column) with strike-through text.
  // For each URL, check the status of the associated task.
  // If there isn't an associated task, create one.
  for (var i = 0, j = urls.length; i < j; i++) {
    if (comments[i] == "") {
      Logger.log("New task");
      var task = Tasks.newTask();
      task.setTitle(urls[i]);
      var newTask = Tasks.Tasks.insert(task, taskListId);
      sh.getRange(i + 2, 1).setComment(newTask.getId());
    } else {
      Logger.log("Existing task");
      var existingTask = Tasks.Tasks.get(taskListId, comments[i][0]);
      if (existingTask.getStatus() === "completed") {
        sh.getRange(i + 2, 1, 1, 2).setFontLine('line-through');
      }
    }
  }
}
That's it. Stop by next time for the last part in this series, where we put everything together and add a two-item menu to tie it all together. Thanks for reading!
Parts Overview
UrlFetch Services
If you've ever used an HTTP client library in other contexts, you'll be completely at home with the base classes available in the UrlFetch Services. Following the simplest-thing-that-could-possibly-work philosophy, all we need to do to fetch a resource and grab the payload is use the UrlFetchApp class, specifically the fetch() method. It returns an HTTPResponse object, which has everything you need: content, headers and response code.
Here's an example of getting the signature from the server that serves this site:
var response = UrlFetchApp.fetch('http://www.pipetree.com/');
Logger.log(response.getHeaders()['Server']);
--> Apache/2.2.14 (Ubuntu)
The Google+ API largely follows a RESTful design, which means that we can use the UrlFetch Services to interact with it.
The Google+ API
The Google+ API is relatively new, and at the moment, read-only. This is fine for what we want to use it for in this example. There are two aspects of the API that are relevant for us:
The UrlFetch Services provide us with a facility, in the form of the OAuthConfig class, for configuring and managing OAuth in a client context. But we'll go for the simpler approach and use an API key, which we can obtain using the Google API Console; see the previous instalment of this series for more details: Using the Tasks API to retrieve and insert tasklists, and the Ui Services to build the tasklist chooser component.
The idea for this example app is to capture a list of URLs that a person on Google+ has posted, and perhaps commented on. We can get this info from the Activities part of the API.
To get the activity stream for a given person, we need to retrieve the following resource:
https://www.googleapis.com/plus/v1/people/{userId}/activities/{collection}
The {userId} is the Google+ ID of the person, and {collection} in this case is 'public', the only collection available right now. In addition, we need to specify our API key in a 'key' parameter on the query string. The default representation is JSON. This is what we get back as a result (heavily elided for brevity):
{ "kind": "plus#activityFeed", "title": "Plus Public Activity Feed for Martin Hawksey", "id": "tag:google.com,2010:/plus/people/1146628[...]/activities/public", "items": [ { "kind": "plus#activity", "title": "Latest post from me. Elevator pitch: [...]", "id": "z12cxlppixzwjbqzi04cdnvg1wbyflbz3r00k", "url": "https://plus.google.com/1146628[...]", "verb": "post", "object": { "objectType": "note", "content": "Latest post from me. Elevator pitch: Service [...]", "originalContent": "", "url": "https://plus.google.com/1146628[...]", "attachments": [ { "objectType": "article", "displayName": "SpreadEmbed: Turning a Google Spreadsheet [...]", "url": "http://mashe.hawksey.info/2011/10/spreadembed/" }, { "objectType": "photo", "image": { "url": "http://images0-focus-opensocial.google[...]", "type": "image/jpeg" }, "fullImage": { "url": "http://mcdn.hawksey.info/content/images/[...]", "type": "image/jpeg", "height": 204, "width": 350 } [...]
Even after heavy eliding for this blog post, that's still an awful lot of JSON, but we're only actually interested in the URLs that the person links to. We can spot these in the 'plus#activity' items array, as attachments with objectType 'article'; they have url and displayName attributes:
{ "items": [ { "kind": "plus#activity", "object": { "attachments": [ { "objectType": "article", "displayName": "SpreadEmbed: Turning a Google Spreadsheet [...]", "url": "http://mashe.hawksey.info/2011/10/spreadembed/" }, [...]
Partial Responses
And it just so happens that in the interests of efficiency, Google offers partial responses, in the form of a fields parameter. So we can add this parameter to the query string, with an XPath-style value like this:
fields=items/object/attachments(url,displayName)
So the resulting JSON representation is a lot lighter, like this:
{ "items": [ { "object": { "attachments": [ { "displayName": "SpreadEmbed: Turning a Google Spreadsheet[...]", "url": "http://mashe.hawksey.info/2011/10/spreadembed/" } ] } }, ] }
Much better!
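To get a feel for what the fields filter is doing, here is a rough plain-JavaScript approximation that prunes a full item down to just the attachment url and displayName. This is illustrative only; the real filtering happens on Google's servers before the response is sent:

```javascript
// Approximate, client-side version of what
// fields=items/object/attachments(url,displayName) keeps
function prune(feed) {
  return {
    items: (feed.items || []).map(function (item) {
      return {
        object: {
          attachments: (item.object.attachments || []).map(function (att) {
            return { url: att.url, displayName: att.displayName };
          })
        }
      };
    })
  };
}

// Hand-made sample item with extra members that the filter should drop
var full = {
  items: [{
    kind: 'plus#activity',
    object: {
      objectType: 'note',
      attachments: [{
        objectType: 'article',
        displayName: 'An article',
        url: 'http://example.com/a'
      }]
    }
  }]
};

var slim = prune(full);
// slim.items[0] now only has object.attachments with url and displayName
```

The kind, objectType and other members disappear, which is exactly why the real partial response is so much lighter on the wire.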
Retrieving the Activity Data
So now it's time to have a look at the code that will retrieve the activity info and insert the URLs into the spreadsheet. We'll do this in a single function, retrieveActivityUrls_(), which will grab the existing URLs from the sheet, retrieve the activity data, and append any new article URLs.
Letās go!
First, some constants.
APIKEY = 'AIza[...]drBs'; // (get your own!)
ACTIVITYLISTURL = 'https://www.googleapis.com/plus/v1/people/{userId}/activities/{collection}';
USERIDCELL = 'B1';
USERID = '106413090159067280619'; // Fallback: Mahemoff!
Now for the function. We get a handle on the active sheet, note the last row (which denotes where the list of URLs currently ends), and get those URLs. We're assuming that the list starts at row 2, i.e. there's a header line in row 1. The resulting urlList array is two-dimensional, although as we've specified we only want a single column of values, the data will look something like this:
[[http://cloud9ide.com], [http://jsconf.eu], [...]]
We create an object to hold the existing ('old') URLs, and the eventual 'new' URLs about to be retrieved. We're using an object for the 'old' URLs so we can easily check whether a new one is in the list or not. We just need an array for the 'new' URLs.
function retrieveActivityUrls_() {
  // Grab existing list of URLs
  var sh = SpreadsheetApp.getActiveSheet();
  var lastRow = sh.getLastRow();
  var urlList = sh.getRange(2, 1, lastRow - 1 || 1).getValues();
  var list = {'old': {}, 'new': []};
  for (var i in urlList) {
    list['old'][urlList[i]] = 1;
  }
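That 'old' object is doing duty as a simple set, and the trick works in any JavaScript environment. Here is a quick standalone sketch, with made-up sample rows mimicking what getValues() returns for a one-column range:

```javascript
// getValues() on a one-column range yields an array of one-element rows
var urlList = [['http://cloud9ide.com'], ['http://jsconf.eu']];

var list = { 'old': {}, 'new': [] };
for (var i in urlList) {
  // An array used as a property key is coerced to a string, so
  // ['http://cloud9ide.com'] becomes the key 'http://cloud9ide.com'
  list['old'][urlList[i]] = 1;
}

// Membership tests are then just the 'in' operator
var seen = 'http://jsconf.eu' in list['old'];      // true
var unseen = 'http://example.com' in list['old'];  // false
```

The string coercion of the one-element row arrays is what lets the later lookup (attachment.url in list['old']) work against plain URL strings.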
We're going to retrieve the activity for a Google+ person, identified by an ID either in a cell in the sheet given by the range in constant USERIDCELL (see the screencast in Part 1), or a default specified in constant USERID.
  // Use the userid in the sheet, fallback to a favourite
  var userid = sh.getRange(USERIDCELL).getValue() || USERID;
Now we have enough information to build the Google+ API resource URL, so we call a helper function buildActivityListUrl_(), passing it the user ID, the collection ('public'), and our API key. (We'll look at buildActivityListUrl_() shortly.) We use the UrlFetchApp fetch() method to grab the resource, calling getContentText() to obtain the JSON content. And with a JSON parser available in the Utilities Services, we quickly have all we need to retrieve those URLs posted in the activity list, in the 'activities' object.
  // Build Google+ API resource and retrieve it; parse JSON content
  var actListUrl = buildActivityListUrl_(userid, 'public', APIKEY);
  var jsonString = UrlFetchApp.fetch(actListUrl).getContentText();
  var activities = Utilities.jsonParse(jsonString);
From examining the JSON representation of the activities earlier in this post, we know we'll be expecting items, within each item an object member, and within that object member a number of attachments. We're only interested in those attachments of type 'article', and if we find one, we want the url and the displayName.
If we've got an article attachment, we then need to determine whether it's a new URL or one we already have. That's where the list object comes in. Unless we can find the URL in the 'old' object, it's a new one, so we need to add it to the 'new' list.
  // We're looking for the item object attachments, where the
  // attachment's objectType is 'article'. We want the url and displayName
  for (var i in activities.items) {
    var attachments = activities.items[i].object.attachments;
    for (var a in attachments) {
      var attachment = attachments[a];
      // We've got a URL and title; store it as new if it doesn't
      // already exist. Store it as a list of lists, ready for
      // a setValues([][]) insert
      if (attachment.objectType == 'article') {
        if (!(attachment.url in list['old'])) {
          list['new'].push([attachment.url, attachment.displayName]);
        }
      }
    }
  }
At this stage, we're ready to add any new URLs to the list in the sheet. Note that when we pushed onto the 'new' list, we pushed an array of the url and displayName. This is the ideal two-dimensional array ([[a, b], [c, d], [...]]) to specify as the value in the setValues() call on a two-dimensional cell Range. And it's useful if we want to follow the sage advice in "Common Programming Tasks" on using batch operations where possible: we can add all the new URL info to the sheet in a single getRange() and setValues() call pair:
  // Blammo!
  if (list['new'].length) {
    sh.getRange(lastRow + 1, 1, list['new'].length, 2).setValues(list['new']);
  }
}
Now that's the retrieveActivityUrls_() function out of the way, let's have a look at the helper function buildActivityListUrl_() that we called earlier. It takes three parameters: the ID of the person on Google+, the collection we want to retrieve ('public' in this case), and the API key. It uses a URL template in the ACTIVITYLISTURL constant and replaces the placeholders. It also adds the API key, and the XPath-style fields statement.
function buildActivityListUrl_(userId, collection, apiKey) {
  var actListUrl = ACTIVITYLISTURL;
  actListUrl = actListUrl.replace(/{userId}/, userId);
  actListUrl = actListUrl.replace(/{collection}/, collection);
  actListUrl += '?key=' + apiKey;
  actListUrl += '&fields=items/object/attachments(url,displayName)';
  return actListUrl;
}
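Since this helper is plain string manipulation, it can be exercised outside Apps Script too. Here is a standalone version with a placeholder key (MY-KEY is not a real API key):

```javascript
// URL template for the Google+ activities list resource
var ACTIVITYLISTURL = 'https://www.googleapis.com/plus/v1/people/{userId}/activities/{collection}';

// Substitute the placeholders and append the key and fields parameters
function buildActivityListUrl(userId, collection, apiKey) {
  var actListUrl = ACTIVITYLISTURL;
  actListUrl = actListUrl.replace(/{userId}/, userId);
  actListUrl = actListUrl.replace(/{collection}/, collection);
  actListUrl += '?key=' + apiKey;
  actListUrl += '&fields=items/object/attachments(url,displayName)';
  return actListUrl;
}

var url = buildActivityListUrl('12345', 'public', 'MY-KEY');
// url is 'https://www.googleapis.com/plus/v1/people/12345/activities/public?key=MY-KEY&fields=items/object/attachments(url,displayName)'
```

Note that String.prototype.replace returns a new string rather than modifying in place, which is why each result is reassigned to actListUrl.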
That brings us to the end of this part in the series. At this stage we have covered the tasklist determination using a user interface and pulled the URLs posted on a Google+ activity stream, storing them in the sheet.
In the next part, we'll look at synchronising the URLs in the sheet with tasks on the chosen tasklist.
Stay tuned!
Parts Overview
Tasks API
The availability of the Tasks API within the Google Apps Script context was announced recently on the Google Code blog. Using the Google APIs Discovery Service makes it easier to build client libraries for the myriad APIs available; this is what Google have done to make the BigQuery, UrlShortener, Prediction and Tasks APIs available for us in Google Apps Script. Collectively they're known as Google APIs Services.
Unlike the other services already available, such as those pertaining directly to the Google Apps platform like Spreadsheet, Gmail, DocsList and Calendar, you need to use the Google API Console to turn on these new APIs within the context of a project, agree to the terms & conditions, and note the courtesy call limits available to you.
You can see here a shot of the Tasks API selected for use within a project I created in the Google API Console, and a courtesy limit of 5000 calls per day. Check out a previous blog post "[Automated Email-to-Task Mechanism with Google Apps Script](/blog/posts/2011/10/04/automated-email-to-task-mechanism-with-google-apps-script/)" for more background on this Tasks API, and the Google article "[Integrating with Google APIs - Creating a simple reading list](http://code.google.com/googleapps/appsscript/articles/google_apis_reading_list.html)" for a step-by-step account of enabling the API itself (called Tasks Services in Google Apps Script).

Working with Tasklists and Tasks
So, what do we need to do with the Tasks Services? As you can gather from watching the screencast in Part 1, we need to retrieve a list of existing tasklists, we might need to create a new tasklist, and we need to be able to add tasks to a specific tasklist. We also need to build a Ui component to present the list of the user's tasklists, so a tasklist can be chosen, plus an option to create a new tasklist.
Retrieving the Tasklists
Let's start with retrieving a list of tasklists. While this is pretty simple, we'll encapsulate it in a function as we'll be calling it a couple of times within this example.
function getTasklists_() {
  var tasklistsList = Tasks.Tasklists.list();
  return tasklistsList.getItems();
}
We use the Tasklists member of the Tasks class, which gives us a TasklistsCollection. We call the list() method to retrieve a Tasklists object, which represents a list of all the authenticated user's tasklists. Calling getItems() on this object gives us an array of Tasklist objects - the list of tasklists that we need.
Building the Ui
We'll need the list of tasklists to show in the Ui component, so let's look at building that Ui component next. Building user interfaces in Google Apps Script can appear somewhat daunting at first glance, but don't worry - it's actually very straightforward. You have the choice between building the Ui in code (using Ui Services calls) or using a visual editor, much like you might in other IDEs. This latter approach was announced and described in detail on the Google Apps Developer blog, following this year's Google I/O.
We'll build our Ui in code. If you need an intro to this, have a look at the Google Apps Script "Building a User Interface" documentation.
We want to be able to display to the user a list of their existing tasklists so they can choose one, and also give them a chance to enter the name of a new tasklist instead. So we need a dropdown list (otherwise known as a listbox), a textbox, some text labels, and a button. This is what the end result should look like:
It's showing the Ui title ("Task Lists"), some labels, a dropdown list with the two existing tasklists that the authenticated user already has, an empty textbox (behind the dropdown) where a new tasklist name can be entered, and a button to which we can attach an event handler.

Layout is achieved using Panels and Grids, both containers for elements. Here, we'll use a VerticalPanel, where the elements are arranged vertically, and a Grid, where we can arrange elements in a two-dimensional way.
Schematically, this is what we're going to do:
So, let's look at the code that builds this Ui. We start by getting a handle on the active spreadsheet (doc), and creating a new Ui application (app), giving it a title. At the end of this function we'll be passing the Ui application to the active spreadsheet to show.

function taskListUi() {
  var doc = SpreadsheetApp.getActiveSpreadsheet();
  var app = UiApp.createApplication();
  app.setTitle('Task Lists');
Next, we create a vertical panel (panel) and a listbox (lb), both of which exist independently. We set a name for the listbox ('existingList') so we can refer to it later in the callback context. After using the getTasklists_() function described earlier, we fill the listbox with the tasklist names (or 'titles') retrieved.
  // We'll have a grid and a button in this
  // vertical panel
  var panel = app.createVerticalPanel();

  // Use a listbox to display a choice of existing tasklists
  var lb = app.createListBox(false);
  lb.setName("existingList");
  var tasklists = getTasklists_();
  for (var tl in tasklists) {
    lb.addItem(tasklists[tl].getTitle());
  }
Once we've got the listbox populated, it's time to create the grid (a 2 x 2 layout) and fill the cells with labels, the listbox, and a textbox. We give a name to the textbox ('newList') so we can refer to it later in the callback context, in the same way as for the listbox.
  // Use the grid to layout the listbox, a textbox for a new list,
  // and some corresponding labels
  var grid = app.createGrid(2, 2);
  grid.setWidget(0, 0, app.createLabel("Existing:"));
  grid.setWidget(0, 1, lb);
  grid.setWidget(1, 0, app.createLabel("Or new:"));
  grid.setWidget(1, 1, app.createTextBox().setName("newList"));
Finally we have the button element. Simple enough, but we also need to add a click handler to it in the form of a serverClickHandler. This handler exists as a function in this same script: handleChooseButton_() which is defined after this. The important thing to notice here is that we create an independent serverClickHandler, give it some element context (in this case the grid element we created earlier) so that the element values are available in the context of the handling function, and then assign that handler as a click handler to the button element.
  // The only button; handler will be linked to this button click event
  // Remember to add the grid contents to the callback context
  var button = app.createButton("Choose");
  var chooseHandler = app.createServerClickHandler("handleChooseButton_");
  chooseHandler.addCallbackElement(grid);
  button.addClickHandler(chooseHandler);
Once we've created the button element and sorted out how the click event will be handled, it's time to put the Ui together. We add the elements one by one to the vertical panel: a label, the 2 x 2 grid, then the button. Then we add the actual panel to the app, hand it over to the active spreadsheet to be displayed, and let go!
  // Put it all together and show it
  panel.add(app.createLabel("Select existing or create new list"));
  panel.add(grid);
  panel.add(button);
  app.add(panel);
  doc.show(app);
}
Handling the Button Click
The handling of the click is performed by handleChooseButton_(), as determined by the call to createServerClickHandler() earlier. Let's examine handleChooseButton_() step by step.
We start by assuming that the user has chosen an existing tasklist: we get the value from the listbox via its name within the parameter attribute of the event object passed to the function, i.e. e.parameter.existingList. However, if we've got a value in the textbox, representing the option to create a new tasklist, we create a new tasklist using the Tasks.newTaskList() method of the Tasks Services, and give that new tasklist the title that was specified in the textbox.
Note that setTitle() was called directly in a 'chain' from newTaskList(), and the result assigned to the newTaskList variable. This is possible due to the way the Tasks API has been designed, with most TaskList methods returning the TaskList object itself; this is known as the 'bean' object.
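This chainable style is easy to sketch outside the Tasks API itself. Here's a tiny plain-JavaScript illustration (not the real Tasks Services, just a hand-rolled bean-style object) showing why setters that return the object enable the one-expression chain described above:

```javascript
// A hand-rolled bean-style object: setTitle returns the object
// itself, so calls can be chained and assigned in one expression
function newTaskList() {
  return {
    title: null,
    setTitle: function (t) {
      this.title = t;
      return this;   // returning 'this' is what enables chaining
    },
    getTitle: function () {
      return this.title;
    }
  };
}

var myList = newTaskList().setTitle('Reading List');
console.log(myList.getTitle()); // Reading List
```

Without the `return this`, the chain would break: newTaskList().setTitle(...) would evaluate to undefined and there would be nothing to assign.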
function handleChooseButton_(e) {
  // Assume an existing list was chosen
  var selectedList = e.parameter.existingList;
  // But check for a new list being specified; if so, create
  // a new task list
  if (e.parameter.newList != '') {
    selectedList = e.parameter.newList;
    var newTaskList = Tasks.newTaskList().setTitle(selectedList);
    Tasks.Tasklists.insert(newTaskList);
  }
Now that we've determined the chosen tasklist (either an existing one or a newly created one), we grab the complete list with getTasklists_() and look through it to find the corresponding tasklist id, which we'll need when we want to insert new tasks into that tasklist.
  // Grab the list of tasklists, because we'll need the id
  var taskLists = getTasklists_();
  var taskListId = -1;
  for (var tl in taskLists) {
    if (taskLists[tl].getTitle() === selectedList) {
      taskListId = taskLists[tl].getId();
      break;
    }
  }
Ok, we've determined and retrieved the id for the chosen tasklist, so now it's time to save that info. We'll do that by writing both the tasklist name and id into a cell; the tasklist name into the cell itself, and the id into the cell's comment. This is a common idiom and is quite useful - you can store related information in a single cell, and don't use up too much cell 'real estate'. The cell we're going to use is stored as a constant: READINGLISTCELL; in my spreadsheet that's cell D1.
  // Record the list name and id
  var sh = SpreadsheetApp.getActiveSheet();
  var cell = sh.getRange(READINGLISTCELL);
  cell.setValue(selectedList);
  cell.setComment(taskListId);
Once we've stored the information, it's time for the handler to make sure the Ui is closed, and to acknowledge to the user that a selected list has been recognised. We do this by closing the active Ui application, and using the Spreadsheet's generic 'toast' mechanism to pop up a message.
  // Close the Ui popup and display the name of the chosen list
  var app = UiApp.getActiveApplication();
  app.close();
  SpreadsheetApp.getActiveSpreadsheet().toast(selectedList, "Selected List", 3);
  return app;
}
Hurray - that's the Ui component and the handler all taken care of!
Tune in next time when in Part 3 we look at retrieving information from the Google+ activity stream via the Google+ API, using nothing more than our trusty Google Apps Script HTTP client, UrlFetchApp.
I used the same idea of a reading list, but added a Ui component to allow the user to select a task list interactively, and instead of using the UrlShortener API, I explored the relatively young Google+ API, in that I pulled in articles to read automatically from URLs posted by people on Google+.
Also, in revisiting some of the original reading list features, I tried to approach the coding differently, to be mindful of the advice in the "Optimising Scripts for Better Performance" section of the "Common Programming Tasks" guidelines.
Here's a short screencast that shows the "Reading List Mark 2" in action:
I'll describe how everything is put together over the next few blog posts:
Stay tuned!
At the end of the meetup, I suggested an example of something that would be really easy to put together using Google Apps Script, and very useful: a mechanism to convert incoming emails automatically into tasks.
You can of course convert an email into a task manually using the Gmail UI like this:
But rather than have to open Gmail, find the task email, select it and then choose More Actions -> Add to Tasks, I wanted a hands-off facility where I could, say from my work email, fire off a quick one-liner task that would be added to my list of tasks automatically, silently and without fuss.
With effective use of Gmail's filter facility, labels and a little bit of Apps Script using the Gmail Services, I was able to create a mechanism in the time it took to enjoy my morning coffee.
Building the Automated Email-to-Task Mechanism
Here's how I saw it working:
Then I could fire off an email to qmacro+task@gmail.com from work, with the task one-liner in the Subject, and have that task automatically show up on my task list. Ideal!
The Filter
Once you have the labels, create the filter. This is what the action part of my filter looks like:
I'm specifying that the email be assigned to the label 'newtask', that it should be marked as read immediately, and not appear in the inbox. That way, I don't get distracted by the noise of task emails in my inbox. If you're wondering about the 'newtaskdone' label, we'll get to that in a minute.
The Script Context
Now we're all set up - we can write the script to process the relevant emails, i.e. all those assigned the label 'newtask'.
Start by creating a new Spreadsheet - the script can live attached to that. Add the text "Processed tasks" to cell A1. We'll use this to show how many tasks the script has processed. Use the menu option Tools -> Script editor to get to the Google Apps Script editor.
You can call your project "Mail Management", or whatever you want:
The Script Code
Ok, let's run through the script step by step.
We start with a few constants: the name of the tasklist into which we want our new tasks inserted, and the two labels.
TASKLIST = "DJ's list";
LABEL_PENDING = "newtask";
LABEL_DONE = "newtaskdone";
Next we have a helper function getTasklistId_ which uses the Tasks Services from the new Google APIs Services in Apps Script. You'll need to explicitly state you want to use the Google APIs Services from the File menu, which will lead you to a popup where you can switch on the Tasks API and use the Google API Console to create a project and generate an API key, which you'll need. All of this is described in ample detail in a great article "Integrating with Google APIs - Creating a simple reading list".
This getTasklistId_ function returns a tasklist ID for a given tasklist name - in this case we'll be asking for the ID of the tasklist called "DJ's list". It's early days for the Tasks API and there are a few oddities: in theory we should be able to use the simple API call:
Tasks.Tasklists.get(tasklistName)
but this currently results in an error. So instead we'll grab a list of all the tasklists, and iterate over them looking for our tasklist name. I've suffixed the name of this function, and others in this script, with an underscore; this prevents them from showing up in the dropdown list of runnable functions at the top of the editor.
function getTasklistId_(tasklistName) {
  var tasklistsList = Tasks.Tasklists.list();
  var taskLists = tasklistsList.getItems();
  for (var tl in taskLists) {
    var title = taskLists[tl].getTitle();
    if (title == tasklistName) {
      return taskLists[tl].getId();
    }
  }
}
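The lookup-by-title approach is easy to exercise outside Apps Script. Here's the same iterate-and-match logic sketched in plain JavaScript, with a stubbed list of tasklists standing in for what Tasks.Tasklists.list().getItems() would return (the titles and ids below are invented):

```javascript
// Stubbed tasklists mimicking the Tasklist objects' getTitle()/getId()
var stubTaskLists = [
  { getTitle: function () { return 'Work'; },      getId: function () { return 'id-1'; } },
  { getTitle: function () { return "DJ's list"; }, getId: function () { return 'id-2'; } }
];

// Same shape as getTasklistId_: walk the list, match on title
function findTasklistId(taskLists, tasklistName) {
  for (var tl in taskLists) {
    if (taskLists[tl].getTitle() == tasklistName) {
      return taskLists[tl].getId();
    }
  }
  // like the original, falls through to undefined if nothing matches
}

console.log(findTasklistId(stubTaskLists, "DJ's list")); // id-2
```

Note that, like the original, the function returns undefined when no tasklist matches; callers relying on the ID should be prepared for that.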
Next we have another helper function addTask_ which will create a new task, given a string, and add that new task to a tasklist, given a tasklist name (which it resolves to an ID with getTasklistId_). Note the separation of concerns - a task is created independently of a tasklist, then inserted into that tasklist.
function addTask_(title, tasklistName) {
  var newTask = Tasks.newTask().setTitle(title);
  Tasks.Tasks.insert(newTask, getTasklistId_(tasklistName));
}
We then come to the definition of processPending_, which does the bulk of the mechanism's work. This function gets a handle on each of the two labels we mentioned earlier (labels in the Gmail Services are one of three main classes, along with threads and messages). The idea is that we will process 'pending' emails assigned to the 'newtask' label, and then switch the thread to the 'newtaskdone' label so it won't get processed a second time. With a call to the getThreads() method of the pending label object, we get a list of threads. We're assuming that there's only one email in each thread (task emails are separate and different each time), so we grab the subject from the first message in each thread to use as the one-liner task title, and use the addTask_ helper function to insert a new task into the tasklist.
Once this is done we remove the 'newtask' label and assign the 'newtaskdone' label to the thread.
Finally, we increment the "Processed tasks" counter in the sheet, for a quick indication of how many email-to-task conversions have taken place.
function processPending_(sheet) {
  var label_pending = GmailApp.getUserLabelByName(LABEL_PENDING);
  var label_done = GmailApp.getUserLabelByName(LABEL_DONE);
  // The threads currently assigned to the 'pending' label
  var threads = label_pending.getThreads();
  // Process each one in turn, assuming there's only a single
  // message in each thread
  for (var t in threads) {
    var thread = threads[t];
    // Grab the task data
    var taskTitle = thread.getFirstMessageSubject();
    // Insert the task
    addTask_(taskTitle, TASKLIST);
    // Set to 'done' by exchanging labels
    thread.removeLabel(label_pending);
    thread.addLabel(label_done);
  }
  // Increment the processed tasks count
  var processedRange = sheet.getRange("B1");
  processedRange.setValue(processedRange.getValue() + threads.length);
}
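The 'exchange labels' idiom at the heart of this loop can be sketched in plain JavaScript, with a stubbed thread object in place of the real Gmail Services (the subject and label names here are just sample data):

```javascript
// Stub thread mimicking the Gmail thread methods used above
function makeThread(subject) {
  return {
    labels: ['newtask'],
    getFirstMessageSubject: function () { return subject; },
    removeLabel: function (l) {
      this.labels = this.labels.filter(function (x) { return x !== l; });
    },
    addLabel: function (l) { this.labels.push(l); }
  };
}

var thread = makeThread('Buy milk');

// Process: capture the task title, then swap the labels so the
// thread won't be picked up again on the next run
var taskTitle = thread.getFirstMessageSubject();
thread.removeLabel('newtask');
thread.addLabel('newtaskdone');

console.log(taskTitle);     // Buy milk
console.log(thread.labels); // [ 'newtaskdone' ]
```

The label swap is what makes the whole mechanism idempotent: a thread is only ever in one of the two states, so re-running the script can't create duplicate tasks.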
This last function, main_taskconverter, is more a matter of personal style than necessity - it's the main function that starts the whole mechanism off, and the function that we'll specify in the trigger so this script runs on a regular basis. We get a reference to the active spreadsheet, set the first sheet to be the active one (it usually is anyway) and call the processPending_ function.
function main_taskconverter() {
  // Get the active spreadsheet and make sure the first
  // sheet is the active one
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  var sh = ss.setActiveSheet(ss.getSheets()[0]);
  // Process the pending task emails
  processPending_(sh);
}
And thatās all there is to it!
Scheduling Regular Execution
We want this mechanism to run regularly in the background, so that it converts all incoming task emails to tasks without our intervention. So we'll use a trigger - we can set up a time-driven event trigger so that the script, via the main_taskconverter function, runs every hour.
With a coffee (and biscuit) down, I now have a very slick way of remembering things I have to do. Nice!
Here's the script in its entirety, with comments.
The Google Apps application set, combined with the universally accessible and always-on nature of Google Apps Script, with its access to a ridiculously large set of useful APIs (in the form of Services), makes an ideal platform for rich collaborative data workflow solutions that can be quickly prototyped, and built into robust, reliable and incredibly useful mashups for you and your users.
I've become a big fan of the technology, the platform and the approach, and Google are introducing more features all the time - just this week, access to API services has been introduced, starting with the BigQuery, Prediction, UrlShortener and Tasks APIs.
Last week I had the opportunity to attend an SAP Mentor Webinar on SAP Netweaver Gateway, entitled "Gateway Consumption". Gateway is something I have a good deal of interest in, and have written about in the past. The webinar was a fascinating hour filled with information about how consumption of data and services in your backend SAP systems will be facilitated with SAP's new Gateway product; it included discussion of code generation for Xcode, libraries for BlackBerry apps, and of course Android. Not to mention web-native apps. And don't forget - SAP Gateway is not just for mobile! :-)
The whole Gateway consumption experience is fronted at the sharp (HTTP) end with well-known standards: Atom and the Atom Publishing Protocol (APP), and the Open Data Protocol (OData). Whether they're also well-loved I'll leave for you to decide. Payloads are in XML or JSON (although again, some would say it's the ugliest and most un-JSON-like JSON they've ever seen, but that's another story). The Gateway system itself is an ABAP-stack SAP system, running the Internet Communication Manager, wrapped of course with our beloved Internet Communication Framework.
And thereās the thing.
While slowly but surely the promise of *lightweight over heavyweight*, simplicity over complexity, and open over proprietary protocols continues to be delivered, you've got to admit that's a heck of a lot of layers of stuff that's already building up! Your application, on top of generated libraries, on top of OData, on top of APP, on top of HTTP, on top of ICF, in an SAP system. Gosh!
So what does the desperate enterprise hacker have to do? Walk strong! Learn to walk properly and steadily before you can run. Stand firm upon this stack of technologies, and understand the fundamentals of the ICF, the core HTTP mechanism that underpins everything.
Not only that, but sometimes OData is too much! Sometimes you just want a controlled but ad-hoc exposure of SAP functionality through a simple HTTP interface that you can connect to and interact with using curl! In a Unix-style command pipeline! Is that heresy? I don't care, I do it often! With text/plain! Yes! Sometimes you want to use the power of the ABAP development and debugging environment, the data dictionary and abstracted storage layer, and just whip up a data collection service in a coffee break, instead of trying to shoehorn records into a silly Access database using a batch script.
So.
If you want to understand the solid platform that Gateway and many other technologies are built upon, if you want to use that platform to build your own 'native' HTTP-based applications, if you want to differentiate yourself from the rest of the SAP developers who are rushing headlong into Gateway and HTTP, or if you just want to be able to stand steady on the shoulders of giants and confidently debug the core layers when things don't go to plan, then get to know the ICF. More specifically, get to know it on my course! It'll be fun, too!
GET /course/info HTTP/1.1
Accept: text/plain
Host: omniversity.madlab.org.uk
HTTP/1.1 200 OK
Content-Type: text/plain

Title: Web Programming with the SAP Internet Communication Framework
When: Mon 5th and Tue 6th September
Cost: £300
Where: Madlab, 34-40 Edge St, Manchester M4 1HN
The start of Port Street Beer House's American Beer Festival, running from Mon 27th June until the beer runs out, is clearly an ideal time to review Torpedo, an IPA from one of America's most respected craft beer pioneers, Sierra Nevada. On the second day of this wonderful festival, our favourite craft beer venue is buzzing with talk, tastings and trialling of an array of beers that would put any respectable beer sampling adventure at DEFCON 1.
So Sierra Nevada's Torpedo it is. It sits there, arrogant and assertive in my PSBH glass, a golden copper colour, ready to give my senses a run for their money. From the moment the glass is lifted, the sharp spicy aroma hits the nose, with strong strains of citrus and pine cones. Suddenly I'm camping in northern California, lying prostrate with my face in the pine needle strewn earth.
If I had to pick three words to describe this bold, year-round IPA, they would be 'hop', 'hop' and 'hop' again. Copious amounts of hops are this Chico, CA brewery's trademark, and it's nowhere more evident than in this 7.2% ABV package of liquid heaven. Moreover, it's whole-cone hops that Sierra Nevada uses to produce Torpedo. Concerned about long-term hop quality post-harvest, some brewers eschew the natural form in favour of a processed, pelletized version, but not Sierra Nevada. They have managed to consistently harness the full hop aroma from the whole-cone form using a mechanism they developed for dry hopping - a stainless steel device called the Hop Torpedo. And so the name. Magnum and Crystal hops are the backbone for this brew, and in the dry hopping stage the torpedo injects Citra, a relatively new US hybrid hop variety.
So. At the end of this review I find Torpedo to be bold and totally full of character. There's a huge bitterness that's nicely balanced by a bready malt flavour. And the crisp dry finish makes me think my glass is very empty indeed. The bar beckons.
http://portstreetbeerhouse.co.uk/blog/2011/05/31/review-dark-star-saison-by-dj-adams
A thoroughly enjoyable time! And of course, I checked it in; fitting, as Untappd is celebrating its one millionth check in this week!
The biggest problem, and the biggest enemy of SOA, appears to have sprung from within the SOA bubble itself. Hordes of cargo-cult-ridden ERP architects and consultants have swept into organisations, egged on by respected analyst firms, and declared "SOA is the answer! Now, what is the question?" Before detailed analysis of the challenge at hand, they appear, armed to the teeth with SOA white papers and acronyms, and plonk down their SOA scaffolding superstructure, proudly stating "Whatever solution we end up with must fit in that framework". And so implementations get off on the wrong foot, noses are put out of joint, integrations are brittle by design, and costs shoot way past the budget, like an HTTP request tunnelling to a solitary, unidentifiable endpoint forever out of reach.
What's the answer? In my humble opinion, it's the **re**-coupling of architecture with development. In a comment on Matthias's post, Matt Harding mentions the concept of a "Development Architect". This resonates with me tremendously; I set my title on LinkedIn to include "Coding Architect", which tries to convey a similar concept. Get the people who are thinking hard about architecture to think hard about development, and vice versa, and the correct and appropriate strategies will emerge.
A dusky Saturday evening finds me in Port Street Beer House in Manchester's Northern Quarter. An escape from the Champions League final, and respite from the constant threat of cloudburst. A usual warm welcome and conversation turns to Dark Star's Saison, which their own website describes as "Nothing like what English beer should be". That sounded like a challenge, and a delightful 568ml later I'm tending to agree with them.
The name Saison (French: season) refers to a style of seasonal pale ales traditionally brewed in farmhouses in the French-speaking Belgium countryside for the farmers and field workers at harvest time. The brews were distinctive as each farmhouse produced their own version, but all were strongly hopped (for preservation) and low in alcohol (to keep the workers hydrated & refreshed, and the harvest on track).
This West Sussex brewery's take on the "Belgian-style Farmhouse Ale" has produced a very refreshing and even more drinkable pale ale. With a slightly spicy aroma, overtones of pepper, the beer sits there hazy, rather than cloudy, yellow, looking like a distant cousin of the fuller wheat beers from further east. Saaz, Styrian and Belgian Goldings hops are awakened by a Saison yeast which gives the brew a dry and refreshing taste. There's a distinctive citrus streak running right through the glass, and the whole experience is reminiscent of a Paulaner-with-lemon-wedge, but much more subtle and balanced. The flavours are alive right to the end, with even the last sip as dynamic as the first.
Saison is not an English beer, and nor is it what an English beer should be. It's a great modern take on a traditional style of ale, just as Dark Star is a modern take on a traditional and multi-faceted brewery. Just as it provided respite for the field workers from the harvest heat and toil, so Saison has provided me respite from the football frenzy and fall of rain. Would I have another? I'm already on my way to the bar.
To a large extent I agree with John's sentiments, especially in the context of the cost of education, and perhaps the beginning of the end of 'vocational' degrees. Competition in the graduate job market, cost of living, and tuition fees are all increasing at an alarming rate, and John is calling for universities to make their degree courses relevant to industry, in particular IT consulting, and for students to seriously consider career-orientated courses (rather than subjects they might otherwise wish to study).
The problem is that this drives us dangerously down the path of clone production. Often in my career I have come across graduates of computer-related degree courses who are unable to think for themselves, unwilling to consider solutions that involve approaches beyond what they've already studied, and - while having a tremendously impressive pedigree in, say, compiler design or even XML processing - not able to translate their skills and knowledge into practical application.
I graduated with a Classics degree, which was made up of Latin, Ancient Greek, Sanskrit and Philology. But beyond the questionable ability to translate Ovid into Ancient Greek, or understand how Phrygian influenced later language grammars, I graduated with the skills to think logically, work independently, think outside the box and, most importantly, to learn and assimilate new ideas and approaches and apply them to current problems. This particular set of skills is not specific to Classics by any means, but is a good illustration of soft skills that are wider and deeper than any particular vertical slice of IT.
Yes, I agree that students face serious problems in higher education, but let's not move towards a solution that denies the richness that a traditional non-tech degree affords.
A: appengine.google.com - unsurprising, as I'm a big fan of Google's App Engine.
B: bbc.co.uk - where I go to get the news, although mostly I listen to Radio 4 via my Squeezebox Radio.
C: coastandcountryholidays.co.uk - Michelle and I are taking a holiday in Norfolk later this month.
D: docs.google.com - I'm a big Google Docs user.
E: enterprisegeeks.com - where I go for some excellent ERP / SAP banter.
F: flickr.com - I've been on Flickr for as long as I can remember.
G: google.co.uk - well, duh!
H: http://www.google.co.uk - interesting! Isn't using the scheme in the URL cheating?
I: imdb.com - we're Lovefilm members, but I still use IMDB for film geekery.
J: jsonformatter.curiousconcept.com - JSON is my poison, and this excellent site is the sweetener.
K: www.amazon.co.uk/gp/digital/fiona/manage/ref=docs_dim_box - where I manage my Kindle. I think this is linked to K as I have a bookmark titled 'K' pointing here.
L: linkedin.com - essential!
M: m.untappd.com - even more essential! Also, perhaps more alarming, Untappd is the only site that appears more than once, apart from Google's home page.
N: natwest.com - where I do some of my banking. National Westminster Bank.
O: omniversity.madlab.org.uk - The Manchester Digital Laboratory's Omniversity. Excellent!
P: pipetree.com - my main domain.
Q: qmacro.appspot.com - not been here for a while; this was a general play area on App Engine.
R: router - my Vigor router, to do the occasional port management.
S: slashdot.org - old but still 'lesenswert' (worth reading).
T: twitter.com - I'm a fan of Seesmic's web client, but still use the mothership app for lookups and the like.
U: untappd.com/user/qmacro - ahem. Beer ahoy!
V: vmlu02:8080 - one of my servers; a virtual machine running on a micro-desktop on the shelves behind me. This is a port where I have an App Engine dev server listening.
W: www.google.co.uk - three out of three!
X: nothing!
Y: youtube.com - I thought I was happy when the interweb was just text. But I was wrong.
Z: zino:9000 - zino is the micro-desktop that hosts vmlu02. Listening on port 9000 is my Squeezebox server.
Here's the post: "Project Gateway. A call to arms. Or at least to data."
]]>"Integrated Software. Worldwide." - that used to be the strapline for SAP's enterprise software system R/3. Before that, the mainframe predecessor R/2 was so menacingly monolithic that there was no strapline needed to underline the deep integration and the message that "everything your enterprise needs is inside this large, smooth-sided black object, with a precise ratio of 1:4:9". No light emanated from it, and no light could penetrate it, save for specialised forms of lasers running at a frequency of APPC/LU6.2 (look it up).
Of course, that was then, and this is now. SAP has slowly but surely turned the integration pattern inside out, and it is not uncommon for an enterprise's ERP landscape to have more SAP systems than you can shake a stick at. Or a laser gun. Want CRM? There's an SAP system for that. Want APO? There's an SAP system for that. Want Process Integration? There's an SAP system for that (to paraphrase a modern Apple saying). And all this time, enterprise data and functions -- your information and processes -- have been stored, cocooned, imprisoned inside that constellation of ABAP and Java runtime environments.
Ok, "imprisoned" is a little harsh. There have been, and remain, a myriad ways to invoke processes, pull data, exchange this, expose that. Remember the RFC software development kit (SDK)? Remember registering programs with the gateway process, programs that were written in C and looked almost exactly like the example code that came with that very SDK, with just a bit of custom stuff added by you to make it do what you wanted? How about the Internet Transaction Server, with its 'wgate' process that spoke Common Gateway Interface (CGI), and the so-crazy-it-deserves-respect dynpro-scraping 'agate' process, the only known program apart from SAPGUI itself to attempt to speak the mysterious DIAG protocol? WebRFC templates? What about the venerable SAP Business Connector, a rather handy toolbox of pipes, workflows and dynamic page generations which is still going strong in some corners even today?
No? Well how about Business Server Pages (BSPs)? Mix ABAP and markup in the style of ASP, JSP, DSP or whatever other *SP flavour you can think of, throw in a little extra complexity, and you have a pretty powerful and outward facing toolset. Still using BSPs? Of course, it's a trick question. You want to answer "yes", but you're supposed to answer "ah no, we've embraced the MVC philosophy and have gone all WebDynpro now". You might answer "what's that got to do with SAPGUI?". And I wouldn't hold it against you.
Whatever your poison (and I won't even attempt to cover the SOAP, SOA and Enterprise Service offerings because I, and more importantly you, dear reader, would be here all afternoon), over the years, there's a single truth that emerges when you consider all the tools and technologies past and present that have been made available to you to expose your business information and functions to a wider sphere of users and systems. You can do it, but you do it, inevitably, on SAP's terms. Proprietary protocols. Proprietary (and frankly bonkers) approaches, in some cases. The approaches are predominantly "inside-out". A lot of heavy lifting inside of the SAP system walls, then more stuff outside.
And that's just the server-orientated view. What about the clients? SAPGUI, anyone? How much has that actually changed, deep down, since the days of Windows 3.11? You can't fit SAPGUI in your pocket, either.
So. Here you are. With your most valuable business information and processes inside SAP. Not locked up, by any means. But you're prevented from grabbing and running with that information, those processes, in an agile way, because of the inertia caused by the sheer weight of SAP-specific technology between where your servers end and where your users start.
But it's 2011 and time for a change. A time for a call to arms. Or at least a call to data. SAP's announcement of Project Gateway at 2010's TechEd changes the landscape. In a big way. SAP's slow, inexorable, inevitable move towards open data protocols and standards is to be celebrated. And capitalised upon. What SAP is trying to do with Project Gateway is arguably a game changer in the sport of opening up enterprise data and functions. They have embraced and adopted standards, protocols and approaches such as Atom, the Atom Publishing Protocol (APP), resource orientation (yes, related to Representational State Transfer!) and the Open Data Protocol (OData). From this perspective, if it works for Google and Google's customers, it can work too for SAP and SAP's customers!
Think about it: What Project Gateway intends to deliver is a smooth-edge platform for controlled access to resources in SAP. Yes, I used the word 'resources' deliberately there. And the intended delivery is via access from an outside-in perspective, too! Data and functionality exposed and ordered in terms of URLs. Payloads orientated along public and openly adopted MIME types such as Atom feeds and elements, and JSON. A uniform interface to that seething, writhing mass of enterprise engine parts.
What's the significance of all this? All of a sudden, the playing field is level for you to use the right tools, and the right teams, for the job. For example: Want to build a mobile app that exposes certain timesheet functions from HR? Use jQTouch, jQuery and PhoneGap, identify the right ways in through the Gateway, get your Javascript-savvy developers (what, you have none? Get some!) and away you go, build a web-native app and launch in weeks not months. Heck - change your mind, go wild and build a native iPhone app with Objective-C (you'll regret it! but that's another story) ... and use the same Gateway resources and the same underlying application protocol - HTTP! Stop building, worrying about and being paralysed by custom and brittle chains of integration tech and start delivering apps -- small or large, single-use or long-lived -- to your users.
Project Gateway is almost upon us. What it is and how it will eventually work is important. But what's vastly more important is what it means, what it represents, and what direction SAP is taking.
And with Gateway's arrival, marshal your developers, because the data's already marshalled for you. Ask not what you can do for your data; ask what your data can do for you! Take back control of your data, your processes, your developments, your custom front ends & extensions and your loosely coupled integration.
As I started with references to 2001: A Space Odyssey, I'd like to end with a bad paraphrasing of Bowman's last message to Earth:
ALL THESE WORLDS
OF DATA AND FUNCTIONALITY
ARE YOURS EVEN
THAT BIT IN CO-PS THAT NOBODY USES
USE THEM TOGETHER
USE THEM IN PEACE
Originally published on the Bluefin Solutions website (where they disowned me because of this article ... that championed the introduction of what now drives and supports everything related to SAP's cloud activities from the ABAP platform).
In moving to Chrome and installing the Delicious Tools extension, one thing I really missed from the Firefox-based add-on was the ability, via a simple configuration option, to have the "Mark as private" checkbox on by default. There seemed to be a lot of forum-based discussion on making this work for the Chrome extension, but it seemed no solution was immediately evident. So I decided to investigate, and found out what I could do. This post is as much an aide memoire as anything else.
The Chrome extensions can be administered by entering chrome://extensions into the address bar. This is what you can see for the Delicious Tools extension, when you have the Developer Mode expanded:
There are a couple of interesting things that we can see:
A find and grep later, I find this background.html component's home:
~/.config/google-chrome/Default/Extensions/gclkcflnjahgejhappicbhcpllkpakej/1.0.4_0/
It's not just integration mechanisms that can be built in a loosely-coupled way. Applications built with HTTP, HTML, CSS and Javascript are also, almost by definition, beautifully loosely coupled; on inspecting the Javascript source in this file, we see:
// Show delicious pop-up window
addDelicious = function(conf) {
    var c = conf || {},
        doc = c.document || document,
        url = c.url || doc.location,
        title = c.title || doc.title,
        notes = c.notes || '',
        w = c.width || 550,
        h = c.height || 550,
        deliciousUrl = c.deliciousUrl || "http://delicious.com/save?v=5&noui&jump=close&url=",
        fullUrl;
    fullUrl = deliciousUrl + encodeURIComponent(url)
        + '&title=' + encodeURIComponent(title)
        + '&notes=' + encodeURIComponent(notes);
    window.open(
        fullUrl,
[...]
A simple addition of
fullUrl = fullUrl + "&share=no";
before the call to window.open() will add the query parameter "share=no" to the Delicious URL that is requested, resulting in the HTML form being rendered with the "Mark as private" checkbox already ticked.
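To see the effect of that one-line change outside the extension, here's a small sketch. The base URL and the share=no parameter are taken from the extension source above; the buildSaveUrl helper name and the sample values are my own, for illustration only:

```javascript
// Sketch of the URL the patched addDelicious() ends up opening.
// buildSaveUrl is a hypothetical helper; the deliciousUrl base and the
// share=no parameter come from the extension source shown earlier.
function buildSaveUrl(url, title, notes) {
  var deliciousUrl = "http://delicious.com/save?v=5&noui&jump=close&url=";
  var fullUrl = deliciousUrl + encodeURIComponent(url)
    + '&title=' + encodeURIComponent(title)
    + '&notes=' + encodeURIComponent(notes);
  // The one-line patch: ask Delicious to pre-tick "Mark as private"
  fullUrl = fullUrl + "&share=no";
  return fullUrl;
}

console.log(buildSaveUrl('http://example.com/', 'Example', ''));
```

Loose coupling at its finest: the extension doesn't need to know anything about the Delicious save form beyond one extra query parameter.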
Result!
Sure, SAP have been developing non-core software, services and processes for years now. But it hasn't been until today that the realisation has truly hit home for me. The only team I've found that's building anything in ABAP here is ... ours. Ok, Gregor Wolf told me about some very interesting work on Webhooks last night which involved some ABAP coding, but that was probably more out of necessity than anything else. BPM is on everyone's lips. Moreover, to speak of SAP's Java server offering is already passé and almost uninteresting.
And then there's River.
River is a project that has been collaboratively built by the SAP Labs teams in Israel and Palo Alto. I talked to Lior Schejter, who told me more about it. It's a platform-as-a-service offering that's remotely related to Google's App Engine (although with more UI) and that allows the development, customisation, hosting and running of "small" applications. It's hosted on Amazon EC2 and uses Tomcat to serve. Applications are built in the flow-logic style of BPM, and consist of user interfaces (what I saw was Flash-based) with business logic controlling the processes in the back end. There's a UI builder, and the business logic can be built and modified either diagrammatically or with Javascript, which runs on the server.
Even though River is arguably in beta right now, what I saw was very impressive. It's also fair to say that there are a number of milestones that the team are working towards. Online editing and development is essentially embryonic right now. There's no source code repository integration or version control. Yes, I know what you're saying, and I agree: River could learn and take from the fascinating and fabulous Bespin (now "Skywriter") project. In fact, there's a loose connection already: at last year's SAP TechEd in Vienna I got Bespin connected to, and checking in and out from, SAP's Code Exchange platform. Furthermore, offering the ability to debug Javascript that runs on the server is not a simple task (even Google Apps Script doesn't have that yet, and developing and debugging for apps destined for Google's App Engine is done locally using the SDK). Lior told me of a very interesting and so far successful approach to solving this problem: run and debug the Javascript locally and use a proxy for the River-specific API calls.
There is of course plenty more to say about River, as you can imagine. The project is very interesting and they're attempting to address hard problems and build a very current offering. But what struck me the most about River is the technologies they're using, and the audiences and customers that SAP are addressing. These are expanding all the time. Lior even related to me that it had been difficult for his team to find a person who knew ABAP, to help with some of the (minor!) experimental BAPI backend integration!
A far cry from the days of old. This is not your father's SAP.
Expanding upon the concept of an earlier contextual project called Dashboard, Gmail contextual gadgets give a clear message that email, as a universal information carrier and workflow pipeline, is not only here to stay, but is being given a new lease of life as it plays a foundational role in Google's enterprise scale application platform strategy. A Gmail contextual gadget enhances email messages by providing information or functionality that is relevant to the context of that email ... right inside the email itself. Context is exposed by content extractors in the form of "clues" in Gmail (akin to Dashboard's "cluepackets") and matched content is provided to the gadget at runtime.
Extractors, optional filters, and scope declarations (used by the installer of a gadget to decide whether to install or not, based upon privacy and security requirements) are defined in a manifest, along with references (via gadget spec files) to the gadgets that are to be triggered.
What makes these Gmail contextual gadgets even more attractive is the Google Apps Marketplace, where developers can make gadgets available, and consumers can use the "Add it now" button to start using them in their own domains.
Developing Gmail contextual gadgets is relatively straightforward, but there are a few things that might cause you to stumble, such as documentation (we're early adopters!), cacheing issues and not being completely aware of what match information is provided.
Despite the advent of Wave and Buzz, it's obvious that Google sees, rightly in my opinion, a tremendous amount of value in the venerable email application, and I thought I'd take the opportunity to document my first attempt at enhancing the contextual experience with a Twitter-flavoured Gmail contextual "Hello World" gadget.
Twitter User Info
"Twitter User Info" is a Gmail contextual gadget that provides basic info about Twitter users whose Twitter handles appear in the email Subject line. In this example, the profile image and basic Twitter user info is shown for Joseph, whose Twitter handle @jcla1 appears in the Subject of the email from Michelle:
The contextual gadget appears directly below the email body, and starts with the title and description "Twitter - User Info" (defined in the gadget spec) and contains HTML showing the Twitter info.
Components and hosting
What are the components that make up this gadget? First of all, we need the manifest and the gadget spec itself. To support the dynamic creation of contextual content in the email, we will be using jQuery, not only because it's a fantastically useful and powerful library for manipulating web page content, but also because of Google's intention to use Caja to provide a layer of protection for the user of Javascript-based apps. The jQuery library is listed as one of the development frameworks that will be compatible with Caja. There's also a tiny bit of CSS.
Beyond that, we will of course be making a call to one of the Twitter API endpoints, and calling upon one of my favourite HTTP tools, PostBin, to dump Google gadget library method return values for inspection.
While the manifest is uploaded to Google when you make your gadget available in the Marketplace, your gadget spec needs to be accessible online (so the gadget container can pull it in at the appropriate moment). There are many options for hosting content online, but for this experiment I decided to create a new App Engine application "qmacro-contextual" and host the gadget and CSS as static files there (I'm also storing the manifest there too). This might appear as overkill, but as I progress further into contextual gadget development, I will most definitely want to do some of the heavy app lifting outside of the actual gadget spec, and for this, App Engine is ideal.
Here's part of the app.yaml file showing the handler declarations for the static manifest, gadget and CSS resources:
application: qmacro-contextual
version: 1
runtime: python
api_version: 1

handlers:
- url: /manifests
  static_dir: manifests
  expiration: 1m
- url: /gadgets
  static_dir: gadgets
  expiration: 1m
- url: /css
  static_dir: css
- url: .*
  script: main.py
Note that I've specified an expiration period of 1 minute for the manifest and gadget spec directories. This is for development only: it ensures that App Engine serves up these resources with a very short shelf life, so that I can tweak the definitions and code and have them reloaded by the gadget container.
Incidentally, there's also a URL query string parameter you can specify that causes gadget cacheing to be turned off - just append "?nogadgetcache=1" to the Gmail URL and this should do the trick.
The Manifest
I followed the Developer's Guide to construct the manifest, which you can see here in full:
http://qmacro-contextual.appspot.com/manifests/twitter-user-info.manifest.xml
The interesting parts of the manifest which relate to Gmail contextual gadgets are the Extractor, Gadget and Scope declarations.
The Extractor declaration looks like this:
<!-- EXTRACTOR -->
<Extension id="SubjectExtractor" type="contextExtractor">
  <Name>Twitter IDs in Subject</Name>
  <Url>google.com:SubjectExtractor</Url>
  <Param name="subject" value=".*@[a-z]+.*"/>
  <Triggers ref="TwitterUserInfoGadget"/>
  <Scope ref="emailSubject"/>
  <Container name="mail"/>
</Extension>
Each extractor (there can be more than one for any given manifest) is defined with an id and name, and references a particular Extractor ID which does the work of pulling the info out of the email. Here we're referencing google.com:SubjectExtractor, which is an extractor provided by Google for pulling out the Subject line. Google will be opening up opportunities for developers to build their own extractors if the pre-defined ones don't provide what we need.
The google.com:SubjectExtractor is defined as returning one output field, @subject, which is made available to the gadget to do with as it wishes - more on that later. It also has one scope defined, tag:google.com,2010:auth/contextual/extractor/SUBJECT, which must be linked with a scope definition in a later section of the manifest.
We can see the reference to the @subject output field in the <Param> tag. This is a filter definition, which says here that we only want the extractor to trigger the gadget if the email subject matches the given regular expression - i.e. if it contains a Twitter handle. Clearly, we want to avoid triggering gadgets when there's nothing for the gadget to do; not only to avoid unnecessary almost-empty gadget displays, but also for performance reasons: without a filter, this extractor would fire for every email you looked at. The filter is optional, but Google recommends that even if you want to match on every occurrence, you put an explicit catch-all regular expression ".*" to make that clear.
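To get a feel for how that filter pattern behaves, here's a quick sketch. Gmail applies the filter server-side, of course; the sample subjects below are made up purely to show which would (and wouldn't) trigger the gadget:

```javascript
// The filter pattern from the extractor's Param declaration above.
// This sketch just exercises it against some invented subject lines.
var filter = /.*@[a-z]+.*/;

var subjects = [
  'Have you seen what @jcla1 is up to?',  // contains a handle -> triggers
  'Minutes from the meeting',             // no handle -> gadget stays quiet
  'ping @qmacro and @jcla1'               // multiple handles -> still one trigger
];

subjects.forEach(function(subject) {
  console.log(subject, '->', filter.test(subject));
});
```

Note that the leading and trailing `.*` are what let the handle appear anywhere in the subject.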
The reference to "TwitterUserInfoGadget" points to the next declaration, that of the gadget itself:
<!-- GADGET -->
<Extension id="TwitterUserInfoGadget" type="gadget">
  <Name>Twitter User Info contextual gadget</Name>
  <Url>http://qmacro-contextual.appspot.com/gadgets/twitter-user-info.gadget.xml</Url>
  <Container name="mail"/>
</Extension>
The id of the gadget, "TwitterUserInfoGadget", is what is referred to in the <Triggers> element of the Extractor declaration. The Url declared for the gadget is
http://qmacro-contextual.appspot.com/gadgets/twitter-user-info.gadget.xml
and this is what will be requested by the Gmail contextual gadget container to pull in the gadget spec. Here's part of an App Engine log record showing the gadget spec being fetched:
The name declared in this Gadget declaration ("Twitter User Info contextual gadget"), along with the name in the Extractor declaration ("Twitter IDs in Subject") and the general name and description from elsewhere in the manifest, are text items that appear to the Google Apps domain administrator when selecting the gadget for installation, like this:
Finally, we have the Scope declaration, which was indicated in the Extractor declaration earlier. This is "emailSubject", and contains the scope URI defined for the extractor being used. There may be more than one scope for a given extractor; if this is the case, they must each be defined separately and explicitly.
This information appears during gadget installation, where the administrator can review what the gadget will access, and decide whether or not to proceed:
Once you've defined your manifest, you must upload it as part of the overall Listing Information required to offer a gadget or an app on Google Apps Marketplace. You have to sign up to become a vendor with Google in order to do this. It's free, as is the listing of unpublished test gadgets and apps, so you can experiment all you need to.
The Gadget Spec - Declarations
Now we've dealt with the manifest, it's time to turn our attention to the gadget spec. Remember that the gadget is triggered when we get a Subject line that contains one or more Twitter handles. If you've developed a gadget before, for iGoogle, for example, this should be familiar to you. First we have the ModulePrefs section where we declare basic gadget information and the features that we require. There's a feature specific to Gmail contextual gadgets that we must declare here. Then we have the gadget code itself, in a CDATA section.
Here's what the ModulePrefs section looks like:
<ModulePrefs title="Twitter" description="User Info" height="50" author="DJ Adams" author_email="dj.adams@pobox.com" author_location="Manchester"> <Require feature="dynamic-height"/> <Require feature="google.contentmatch"> <Param name="extractors"> google.com:SubjectExtractor </Param> </Require> </ModulePrefs>
The title and description in the module prefs show up ("Twitter - User Info") when the gadget is displayed at the bottom of the email. We define a height for the gadget, which can be auto-adjusted later with the dynamic-height feature declared in this section too. A feature that's specific to Gmail contextual gadgets, and that must be declared for all such gadgets, is google.contentmatch. In declaring this feature, you must list the Extractor id (or ids) that will be triggering this gadget.
The google.contentmatch feature gives us the facility we need to avail ourselves of the, ahem, content that was matched in this context. As you will see, we use the getContentMatches() method to do this.
The Gadget Spec - Code
With the ModulePrefs declarations out of the way, we get to the Javascript that breathes life into our gadget. The Javascript is defined in the Content section of the gadget spec, inside the CDATA section mentioned earlier:
<script type='text/javascript' src='http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js'></script>
<script type="text/javascript">

  // Expect subject as first element keyed by 'subject'
  matches = google.contentmatch.getContentMatches();
  jQuery.post('http://qmacro-postbin.appspot.com/1jd620g', 'matches:' + JSON.stringify(matches));
  var subject = matches[0]['subject'];

  // Only do something if we actually have a subject to work with
  if (subject) {

    // Pick out the twitter @handles and process them
    var handles = subject.match(/@[a-z0-9_]+/g);
    if (handles) {

      $('head').append('<link rel="stylesheet" href="http://qmacro-contextual.appspot.com/css/twitter-user-info.css" />');

      for (var i = 0; i < handles.length; i++) {
        var user_resource = 'http://api.twitter.com/users/show/' + handles[i] + '.json?callback=?';
        $.getJSON(user_resource, function(data) {
          jQuery.post('http://qmacro-postbin.appspot.com/1jd620g', 'userinfo:' + JSON.stringify(data));
          var loc = "";
          if (data.location) {
            loc = ' (' + data.location + ')';
          }
          var tw_info = '<table border="0">'
            + '<tr>'
            + '<td>'
            + '<a href="' + data.url + '">'
            + '<img src="' + data.profile_image_url + '" />'
            + '</a>'
            + '</td>'
            + '<td class="userinfo">'
            + '<a href="http://twitter.com/' + data.screen_name + '">@' + data.screen_name + '</a>'
            + '<br />' + data.name + loc
            + '<br />' + data.description
            + '</td>'
            + '</tr></table>';
          jQuery(tw_info).appendTo('body');
        });
      }
      gadgets.window.adjustHeight(100);
    }
  }
</script>
First, we pull in the jQuery library with a <script/> tag, and then we're off with our gadget code.
We use the google.contentmatch.getContentMatches() method to pull in the matches supplied to us by the Extractor. One of my favourite phrases is "let the dog see the rabbit" - let's have a look at the data, in this case. What does getContentMatches() actually return? What does it look like? This is where the rather useful PostBin comes into play. When we get the response from the call to getContentMatches(), we encode it into JSON string form with JSON.stringify() and bung the whole lot to a Postbin to see. Easy! Of course, this is only appropriate for development and debugging - I'd remove it for a production gadget. By the way, I'm running my own instance of Postbin - you can run your own instance too.
So from looking at what we get, we can see that what we're after is the value of the 'subject' key in the first element of the matches array.
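Based on what the Postbin dump showed, the shape of that matches value, and the two steps the gadget performs on it, can be sketched like this (the sample array here is illustrative, not a verbatim dump):

```javascript
// Illustrative shape of what getContentMatches() hands us: an array of
// objects, keyed by the extractor's output field ('subject' here).
var matches = [
  { 'subject': 'Have you seen what @jcla1 is up to?' }
];

// Step 1: the value we're after is under 'subject' in the first element
var subject = matches[0]['subject'];

// Step 2: pick out all the Twitter handles, same regex as the gadget uses
var handles = subject.match(/@[a-z0-9_]+/g);

console.log(handles);  // [ '@jcla1' ]
```

If the extractor declared more output fields, each would appear as a key in those objects; here we only have the one.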
By definition, we're only instantiated because there was a Twitter handle in the Subject. There could be more than one, of course. After collecting them into a handles array, we're ready to process each one. First, though, we pull in the CSS resource to the current document. The resource is served as a static file from the App Engine app:
http://qmacro-contextual.appspot.com/css/twitter-user-info.css
For each of the Twitter handles, we want to display some basic info, as shown in the screenshot earlier. To retrieve this, we make a simple call to the Twitter API. Specifically,
http://api.twitter.com/users/show/[Twitter-handle].json?callback=?
will give us a nice chunk of JSON with the information we want, like this:
{ "description":"Developer and Linux Expert", "screen_name":"Jcla1", "url":"http://www.pipetree.com/josephadams", "name":"Joseph Adams", "profile_image_url":"http://a1.twimg.com/profile_images/106288960/JosephWithLomaxCar_normal.jpg", "location":"Krefeld,Germany", ... }
I'd originally started with the jTwitter jQuery plugin but found that it didn't quite do what I wanted, and in any case using the Twitter API from jQuery is straightforward anyway. But thank you, uzbekjon, for getting me started.
You can see from the code that I'm making more use of Postbin, by gratuitously dumping the results of the Twitter API call in there too. I like to see what I'm dealing with. Data::Dumper is my all-time favourite Perl module, if you hadn't guessed.
Once we have the info from Twitter, it's just a simple matter of constructing some HTML, making use of the CSS via the "userinfo" class, and appending that to the email. Job done!
I've pushed my fledgeling qmacro-contextual App Engine project to Github, so you can take a look and create your own "Hello World" Gmail contextual gadget.
Share and enjoy!
Hooked!
Since then we've been fans of the game of endless possibilities and ever changing scope and interest, and almost regulars at our local MTG store, Fan Boy Three on Newton St in Manchester.
So to educate myself in all things Magic, I turned to MTG's official website, Wizards of the Coast's The Multiverse, and in particular, to their incredibly prolific set of column authors on the Daily MTG. More articles on design, deck construction, strategy and match reports than you could shake a Planeswalker card at.
But while I read a lot, the majority of it is on paper: in the bath, on the train, and soaking up the countless minutes lost at the start of every meeting, while you wait for people to get started, fail to get the projector working, fetch coffees or fiddle with the air conditioning. And on paper, the MTG articles are good, but for a novice like me, there's something missing. The articles make lots of references to cards by name, and when reading online, there's a nice popup of the card details so you can see what the author is talking about. But on paper?
So I had an itch to scratch. What I wanted was an accompanying printout of the cards mentioned in any given Daily MTG article. So when the author referred to Hedron Crab, Baloth Woodcrasher or Oran-Rief, the Vastwood, I would know what they were talking about.
I decided to use Google App Engine, and have my Python HTTP responder in the cloud. I created a very simple app "mtgcardinfo", part of my Github-hosted scratchpad area gae-qmacro. Given the URL of an MTG article, the app uses urlfetch() to go and get it, parses out the card names, and produces an HTML response with a whole load of image references. Luckily the card detail popups in the articles are powered by Javascript and are great indicators of card names for anyone who cares to wield a regex to look for them.
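The essence of that regex-wielding can be sketched like this. The real articles' popup markup is specific to the Wizards site and may well differ; the "autoCard" attribute and sample snippet below are a made-up stand-in purely to show the shape of the scrape:

```javascript
// Illustrative card-name scrape. The autoCard markup here is hypothetical;
// the real app's regex targets whatever pattern the article Javascript
// actually uses to drive its card popups.
var articleHtml =
  'Play <a class="autoCard" href="#">Hedron Crab</a> early, ' +
  'then follow with <a class="autoCard" href="#">Baloth Woodcrasher</a>.';

var cardNames = [];
var re = /class="autoCard"[^>]*>([^<]+)</g;
var m;
while ((m = re.exec(articleHtml)) !== null) {
  cardNames.push(m[1]);  // capture group 1 holds the card name
}

console.log(cardNames);  // [ 'Hedron Crab', 'Baloth Woodcrasher' ]
```

From a list like that, emitting an HTML page of card image references is a simple loop.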
And of course to glue it all together, I used a bookmarklet, so I could jump to the list of cards while directly in the article.
So if you're interested, have a go: http://qmacro.appspot.com/mtgcardinfo.
The combination of App Engine, Python, HTTP and Javascript is rapidly becoming my new Swiss Army Knife of choice in the web-based online world. And the best thing? I'm teaching Joseph this stuff, and not only is he incredibly good at it, he loves it!
Fans of REST and ROA (I'm one of them!) state many advantages over SOA, such as:
and subtly, but importantly, ROA is a lot more deserving of the word "Web" in the phrase "Web Services", as it works and flows with Web concepts, rather than, as in the case of SOA, fighting against them. SOA, incidentally, has been referred to as "CORBA with angle brackets", which is as funny as it is true.
REST concepts and ideas have been around SAP for quite a while now; there is of course some coverage here on SDN, such as:
"Forget SOAP - build real web services with the ICF" (me, Jun 2004)
"Real Web Services with REST and ICF" (me, Jun 2004, again) (content lost in SAP community migration)
"REST Web Services in XI (Proof of Concept)" (Wiktor Nyckowski, Mar 2009)
"A new REST handler / dispatcher for the ICF" (me, Sep 2009)
"VCD #16 - The REST Bot: Behind the scenes" (Uwe Fetzer, Sep 2009)
"REST-orientation: Controlling access to resources" (me, Sep 2009, again)
and recently:
"Put SOAP to REST using CE" (Werner Steyn, Nov 2009)
What especially delighted me was the coverage that REST concepts and ideas got at SAP TechEd 2009 in Vienna. Lots of people were talking about it, and mentioning it in presentations. Over half the DemoJam contestants mentioned REST too. I personally had a fascinating and very rewarding chat with SAP guru Thomas Ritter during RIA Hacker Night, and have also corresponded with the very knowledgable Juergen Schmerder. It seems that there is a lot of interest in REST at SAP.
But what about REST in SAP? How might you use it, be guided by it and ultimately build things with SAP NetWeaver technologies?
If you're interested, you might want to attend our upcoming Mentor Monday session
"REpresentational State Transfer (REST) and SAP - An Overview", on Monday 25th Jan at 13:00-14:00 PST.
You can get more information on the SAP Mentor Monday wiki page.
Hope to see you there!
I think this is a great piece of advice, and something that needs to be underlined. To this end, I'd like to tell you a bit of a story.
In the early 1990s, I was working at Deutsche Telekom, in their data centre in Euskirchen, near Bonn, in Germany. I was part of the IBM mainframe and SAP Basis team that ran a fantastically huge SAP installation - around 10 parallel SAP R/2 systems that coordinated and shared data through a central system. The systems ran on IBM mainframes, and were powered by IMS DB/DC (DB for the database management layer, and DC for the transaction processing layer, for you young ones!). They were the best of times. We hacked 370 assembler (yes, including qmacros!) while drinking coffee so strong the spoon would stand up, and wrote Rexx scripts & ISPF panel-based applications to heavy-lift SAP R/2 installations like they were LEGO constructions (and yes, Sergio, we had SBEZ!)
Being an IBM disciple at the time, I was aware how good the IBM documentation was. Seriously. I relished every opportunity to visit the documentation room, where I could diagnose any problem imaginable. Everything you ever wanted to know was there, if you knew where to look.
Anyway, there was a consultant, a veritable guru, Tomaschek I think his name was. He came and went at unearthly hours, drove a Mercedes with double glazing, and Knew Everything. Everything I could imagine knowing about running R/2 on IMS, with VSAM, and more. He knew. Of course, his experience counted for a lot, but I was eager to know how he had become so knowledgable, and so respected. So I asked him.
And he replied: "I read".
Since then, I've made it my business to read as much as I can, about the things I'm interested in. Anything and everything. Source code. Dry documentation. Articles. Books. Magazines. Weblogs. I have a stack of "to read" papers, ready to pop and take with me in the train, to meetings (how many meetings that you are invited to actually start on time?), into the bath. At my time at Deutsche Telekom, I set aside 10 minutes each day to read all the new OSS notes on my favourite areas (it was possible then!)
I feel I've gained a tremendous amount from what I've read. Some stuff I've read and not completely understood. Other stuff I've read and given up, bored. And yes, there's a lot of SAP documentation that could be better.
But if I can give one piece of advice, it's the same advice that I received from Mr Tomaschek all those years ago.
Read.
And then read some more.
]]>Using my new REST handler / dispatcher for the ICF, I can adopt a Resource Orientated Architecture (ROA) approach to integration. This gives me huge advantages, in that I can avoid complexity, and expose data and functions from SAP as resources - first class citizens on the web. From here, I can, amongst other things:
Moreover, I can easily divide up the programming tasks and the logic into logical chunks, based upon resource, and HTTP method, and let the infrastructure handle what gets called, and when.
This is all because what we're dealing with in a REST-orientated approach is a set of resources - the nouns - which we manipulate with HTTP methods - the verbs.
As an example, here's a few of the channel-related resources that are relevant in my Coffeeshop project; in particular, my implementation of Coffeeshop in SAP. The resource URLs are relative, and rooted in the /qmacro/coffeeshop node of the ICF tree.
| Resource | Description | Method | Action |
|---|---|---|---|
| /qmacro/coffeeshop/ | Homepage | GET | Returns the Coffeeshop 'homepage' |
| /qmacro/coffeeshop/channel/ | Channel container | GET | Return list of channels |
| | | POST | Create new channel |
| /qmacro/coffeeshop/channel/123/ | Channel | GET | Return information about the channel |
| | | POST | Publish a message to the channel |
| | | DELETE | Remove the channel |
(For more info on these and more resources, see the Coffeeshop repository.)
This is all fine, but often a degree of access control is required. What if we want to allow certain groups access to one set of resources, and other groups to another set, but only allow a given group, say, to be able to read channel information, and not create any new channels? In other words, how do we control access following a resource orientated approach - access dependent upon the noun, and the verb?
Perhaps we would like group A to have GET access to all channel resources (read-only administration), group B to have GET and POST access to a particular channel (simple publisher access) and group C to have POST access to the channel container and DELETE access to individual channels (read/write administration)?
Before looking at building something from scratch, what does standard SAP offer in the ICF area to support access control?
When you define a node in the ICF tree, you can specify access control relating to the userid in the Logon Data tab:
(image lost in early SAP community platform migration)
This is a great first step. It means that we can control, on a high level, who gets access generally, and who doesn't. Let's call this 'Level 1 access'.
You can also specify, in the Service Data tab, a value for the SAP Authorisation field ('SAP Authoriz.'):
(image lost in early SAP community platform migration)
The value specified here is checked against authorisation object S_ICF, in the ICF_VALUE field, along with 'SERVICE' in the ICF_FIELD field.
[O] S_ICF
|
+-- ICF_FIELD
+-- ICF_VALUE
This is clearly a 'service orientated' approach, and is at best a very blunt mechanism with which to control access.
As well as being blunt, it is also unfortunately violent. If the user that's been authenticated does have an authorisation with appropriate values for this authorisation object, then the authorisation check passes, and nothing more is said. But if the authenticated user doesn't have authorisation, the ICF returns HTTP status code '500', which implies an Internal Server Error. Extreme, and semantically incorrect - there hasn't been an error, the user just doesn't have authorisation. So, violent, and rather brutal. Then again, service orientation was never about elegance :-).
Clearly, what the SAP standard offers in the ICF is not appropriate for a REST approach to integration design. (To be fair, it was never designed with resource orientation in mind).
What we would like is a three-level approach to access control:
Level 1 - user authentication: Can the user be authenticated, generally? If not, the HTTP response should be status 401 - Unauthorised. This level is taken care of nicely by the ICF itself. Thanks, ICF!
Level 2 - general resource access: Does the user have access, generally, to the specific resource? If not, the HTTP response should be status 403 - Forbidden.
Level 3 - specific resource access: Is the user allowed to perform the HTTP method specified on that resource? If not, the HTTP response should be status 405 - Method Not Allowed. As well as this status code, the response must contain an Allow header, telling the caller what methods are allowed.
This will give us an ability to implement a fine-grained access control, allowing us to set up, say, group access, as described earlier.
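As a rough sketch of how these three levels map to HTTP responses - in Python rather than ABAP, with the actual checks reduced to hypothetical boolean inputs - the decision logic looks something like this:

```python
def access_response(authenticated, resource_ok, method_ok, allowed_methods):
    """Map the three-level access checks to an HTTP status and headers.

    The boolean inputs stand in for the real checks: ICF authentication
    (Level 1) and the custom authorisation checks (Levels 2 and 3).
    """
    if not authenticated:
        return 401, {}                                    # Unauthorised
    if not resource_ok:
        return 403, {}                                    # Forbidden
    if not method_ok:
        # A 405 must also tell the caller which methods ARE allowed
        return 405, {"Allow": ", ".join(allowed_methods)}
    return 200, {}                                        # carry on
```

Note how the 405 branch carries the Allow header with it - the status code alone isn't enough to satisfy HTTP's contract.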
Clearly, we're not going to achieve what we want with the SAP standard. We'll have to construct our own mechanism to give us Levels 2 and 3. But, SAP standard does offer us a couple of great building blocks that we'll use.
Why re-invent an authorisation concept, when we have such a good one as standard? Exactly. So we'll use the standard SAP authorisation concept.
So we'll create an authorisation object, YRESTAUTH, with two fields - one for the method, and one for the (relative) resource. This is what it looks like:
[O] YRESTAUTH
|
+-- YMETHOD HTTP method
+-- YRESOURCE resource (relative URL)
We can then maintain as many combinations of verbs and nouns as we like, and manage & assign those combinations using standard SAP authorisation concept tools. Heck, we could even farm that work out to the appropriate security team! Then, when it comes to the crunch, and the ICF is handling an incoming HTTP request, our mechanism can perform authorisation checks on this new authorisation object for the authenticated user associated with the request.
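To make the idea concrete, here's a hypothetical sketch in Python of what maintaining those verb/noun combinations per group amounts to. The group names and resource patterns are illustrative (echoing the group A/B/C example earlier), and fnmatch-style wildcards stand in for SAP authorisation value wildcards - this is not how the authorisation concept is implemented internally:

```python
from fnmatch import fnmatch

# (method, resource pattern) pairs per group, echoing the YMETHOD /
# YRESOURCE fields of YRESTAUTH; '*' plays the role of an auth wildcard
GROUP_AUTHS = {
    "A": [("GET", "/qmacro/coffeeshop/channel/*")],        # read-only admin
    "B": [("GET", "/qmacro/coffeeshop/channel/123/"),      # simple publisher
          ("POST", "/qmacro/coffeeshop/channel/123/")],
    "C": [("POST", "/qmacro/coffeeshop/channel/"),         # read/write admin
          ("DELETE", "/qmacro/coffeeshop/channel/*/")],
}

def authority_check(group, method, resource):
    """True if the group has an authorisation for this verb/noun pair."""
    return any(m == method and fnmatch(resource, pattern)
               for m, pattern in GROUP_AUTHS.get(group, []))
```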
One of the most fantastic things about the generally excellent ICF is the ability to have a whole stack of handlers, called in a controlled fashion by the ICF infrastructure, to respond to an incoming HTTP request. The model follows that of Apache and mod_perl, with flow control allowing any given handler to say whether, for example, it has responded completely and no further handlers should be called to satisfy the request, or that it has responded only partially or not at all, and that other handlers should be called.
So for any particular ICF node that we want to have this granular 3-level access control, what we need is a pluggable handler that we can insert in the first position of the handler stack, to deal with authorisation. Like this:
(image lost in early SAP community platform migration)
As you can see, we have the main coffeeshop handler, and before that in the stack, another handler, Y_AUTH, to provide the Levels 2 and 3 access control. So when an HTTP request comes in and the ICF determines that it's this node ([/default_host]/qmacro/coffeeshop) that should take care of the request, it calls Y_AUTH first.
Y_AUTH is a handler class just like any other HTTP handler class, and implements interface IF_HTTP_EXTENSION. It starts out with a few data definitions, and identifies the resource specified in the request:
method IF_HTTP_EXTENSION~HANDLE_REQUEST.
data:
l_method type string
, l_is_allowed type abap_bool
, lt_allowed type stringtab
, l_resource type string
, l_resource_c type text255
, l_allowed type string
.
* What's the resource?
l_resource = server->request->get_header_field( '~request_uri' ).
* Need char version for authority check
l_resource_c = l_resource.
Then it performs the Level 2 access check - is the user authorised generally for the resource?
* Level 2 check - general access to that resource?
authority-check object 'YRESTAUTH'
id 'YMETHOD' dummy
id 'YRESOURCE' field l_resource_c.
if sy-subrc <> 0.
server->response->set_status( code = '403' reason = 'FORBIDDEN - NO AUTH FOR RESOURCE' ).
exit.
endif.
If the authority check failed for that resource generally, we return a status 403 and that response is sent back to the client.
However, if the authority check succeeds, and we pass Level 2, it's time to check the specific combination of HTTP method and resource - the verb and the noun. We do this with a call to a simple method is_method_allowed() which takes the resource and method from the request, and returns a boolean, saying whether or not the method is allowed, plus a list of the methods that are actually allowed. Remember, in the HTTP response, we must return an Allow: header listing those methods if we're going to send a 405.
* Level 3 check - method-specific access to that resource?
l_method = server->request->get_header_field( '~request_method' ).
translate l_method to upper case.
call method is_method_allowed
exporting
i_resource = l_resource
i_method = l_method
importing
e_is_allowed = l_is_allowed
e_allowed = lt_allowed.
* If not allowed, need to send back a response
if l_is_allowed eq abap_false.
concatenate lines of lt_allowed into l_allowed separated by ','.
server->response->set_status( code = '405' reason = 'METHOD NOT ALLOWED FOR RESOURCE' ).
server->response->set_header_field( name = 'Allow' value = l_allowed ).
So we send a 405 with an Allow: header if the user doesn't have authorisation for that specific combination of HTTP method and resource. (The is_method_allowed() works by taking a given list of HTTP methods, and authority-checking each one in combination with the resource, noting which were allowed, and which weren't.)
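In other words, sketched here in Python with the authority check injected as a callable (the candidate method list is an assumption - the real ABAP method checks whatever list it is given):

```python
CANDIDATE_METHODS = ["GET", "POST", "PUT", "DELETE"]

def is_method_allowed(authority_check, resource, method):
    """Authority-check each candidate method against the resource, as the
    ABAP method does, returning (allowed?, list of allowed methods)."""
    allowed = [m for m in CANDIDATE_METHODS
               if authority_check(m, resource)]
    return method in allowed, allowed
```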
Finally, if we've successfully passed the Levels 2 and 3 checks, we can let go and have the ICF invoke the main handler for this ICF node - Y_DISP_COFFEESHOP. In order to make sure this happens, we tell the ICF, through the flow control variable IF_HTTP_EXTENSION~FLOW_RC, that while our execution has been OK, we still need to have a further handler executed to satisfy the request completely:
* Otherwise, we're golden, but make sure another handler executes
else.
if_http_extension~flow_rc = if_http_extension~co_flow_ok_others_mand.
endif.
endmethod.
And that's pretty much it!
To finish off, here are some examples of the results of this mechanism.
(image lost in early SAP community platform migration)
In the first call, the wrong password is specified in authentication, so the status in the HTTP response, directly from the ICF, is 401. This is Level 1.
In the second call, the user is authenticated ok, but doesn't have access generally to the /qmacro/coffeeshop/ resource, hence the 403 status. This is Level 2.
In the third call, we're trying to make a POST request to a specific channel resource. While we might have GET access to this resource, we don't specifically have POST access, so the status in the HTTP response is 405. In addition, a header like this: "Allow: GET" would have been returned in the response. This is Level 3.
I hope this shows that when implementing a REST approach to integration, you can control access to your resources in a very granular way, and respond in a semantically appropriate way, using HTTP as designed - as an application protocol.
]]>If you're not directly familiar with the ICF, allow me to paraphrase a part of Tim O'Reilly's Open Source Paradigm Shift, where he gets audiences to realise that they all use Linux, by asking them whether they've used Google, and so on. If you've used WebDynpro, BSPs, the embedded ITS, SOAP, Web Services, or any number of other similar services, you've used the ICF, the layer that sits underneath and powers these subsystems.
One of my passions is REpresentational State Transfer (REST), the architectural approach to the development of web services in the Resource Orientated Architecture (ROA) style, using HTTP for what it is - an application protocol. While the ICF lends itself very well to programming HTTP applications in general, I have found myself wanting to be able to develop web applications and services that not only follow the REST style, but do so in a way that is more aligned with other web programming environments I work with.
An example of one of these environments is the one used in Google's App Engine. App Engine is a cloud-based service that offers the ability to build and host web applications on Google's infrastructure. In the Python flavour of Google's App Engine, the WebOb library, an interface for HTTP requests and responses, is used as part of App Engine's web application framework.
Generally (and in an oversimplified way!), in the WebOb-style programming paradigm, you define a set of patterns matching various URLs in your application's "url space" (usually the root), and for each of the patterns, specify a handler class that is to be invoked to handle a request for the URL matched. When a match is found, the handler method invoked corresponds to the HTTP method in the request, and any subpattern values captured in the match are passed in the invocation.
So for instance, if the incoming request were:
GET /channel/100234/subscriber/91/
and there was a pattern/handler class pair defined thus:
'^/channel/([^/]+)/subscriber/([^/]+)/$', ChannelSubscriber
then the URL would be matched, an object of class ChannelSubscriber instantiated, the method GET of that class invoked, and the values '100234' and '91' passed in the invocation. The GET method would read the HTTP request, prepare the HTTP response, and hand off when done.
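The core of that dispatch mechanism can be sketched in a few lines of Python. This is a simplified illustration, not WebOb itself - a real framework would also deal with 404s and wrap the request and response in proper objects:

```python
import re

routes = []

def handler(pattern, cls):
    """Register a URL pattern / handler class pair."""
    routes.append((re.compile(pattern), cls))

def dispatch(method, path):
    """Find the first matching pattern, then invoke the handler method
    named after the HTTP method, passing the captured groups."""
    for pattern, cls in routes:
        match = pattern.match(path)
        if match:
            return getattr(cls(), method)(*match.groups())
    return None  # no route matched; a real framework would return a 404

class ChannelSubscriber:
    def GET(self, channel_id, subscriber_id):
        return "channel %s, subscriber %s" % (channel_id, subscriber_id)

handler(r'^/channel/([^/]+)/subscriber/([^/]+)/$', ChannelSubscriber)
```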
For a real-world example, see coffeeshop.py (part of my REST-orientated, HTTP-based publish/subscribe (pubsub) mechanism), in particular from line 524 onward. You can see how this model follows the paradigm described above.
def main():
application = webapp.WSGIApplication([
(r'/', MainPageHandler),
(r'/channel/submissionform/?', ChannelSubmissionformHandler),
(r'/channel/(.+?)/subscriber/(.+?)/', ChannelSubscriberHandler),
(r'/message/', MessageHandler),
(r'/distributor/(.+?)', DistributorWorker),
[...]
], debug=True)
wsgiref.handlers.CGIHandler().run(application)
This model is absolutely great in helping you think about your application in REST terms. What it does is help you focus on a couple of the core entities in any proper web application or service - the nouns and the verbs. In other words, the URLs, and the HTTP methods. The framework allows you to control and handle incoming requests in a URL-and-method orientated fashion, and leaves you to concentrate on actually fulfilling the requests and forming the responses.
So where does this bring us? Well, while I'm a huge fan of the ICF, it does have a few shortcomings from a REST point of view, so I built a new generic handler / dispatcher class that I can use at any given node in the ICF tree, in the same style as WebOb. Put simply, it allows me to write an ICF node handler as simple as this:
method IF_HTTP_EXTENSION~HANDLE_REQUEST.
handler( p = '^/$' h = 'Y_COF_H_MAINPAGE' ).
handler( p = '^/channel/submissionform$' h = 'Y_COF_H_CHANSUBMITFORM' ).
handler( p = '^/channel/([^/]+)/subscriber/submissionform$' h = 'Y_COF_H_CHNSUBSUBMITFORM' ).
handler( p = '^/channel/([^/]+)/subscriber/$' h = 'Y_COF_H_CHNSUBCNT' ).
handler( p = '^/channel/([^/]+)/subscriber/([^/]+)/$' h = 'Y_COF_H_CHNSUB' ).
dispatch( server ).
endmethod.
The handler / dispatcher consists of a generic class that implements interface IF_HTTP_EXTENSION (as all ICF handlers must), and provides a set of attributes and methods that allow you, in subclassing this generic class, to write handler code in the above style. Here's the method tab of Y_DISP_COFFEESHOP, to give you a feel for how it fits together:
The classes that are invoked (Y_COF_H_* in this example) all inherit from a generic request handler class which provides a set of attributes and methods that allow you to get down to the business of simply providing GET, POST, PUT and other methods to handle the actual HTTP requests.
Here's an example of the method list of one of the request handler classes:
One interesting advantage, arguably a side-effect of this approach, is that you can use nodes in the ICF tree to 'root' your various web applications and services more cleanly, and avoid the difficulties of having different handlers defined at different levels in the child hierarchy just to service various parts of your application's particular url space.
I'd like to end this weblog post with a diagram that hopefully shows what I've been describing:
If you're interested in learning more, or sharing code, please let me know. I'm using this for real in one of my projects, but it's still early days.
For more information on the coffeeshop mechanism, check out the videos in this playlist:
Update 01/05/2012 I've re-added images to this post that were lost when SDN went through the migration to the new platform. This project is now called ADL - Alternative Dispatcher Layer - and is on the SAP Code Exchange here: https://cw.sdn.sap.com/cw/groups/adl
Update 09/09/2020 Added a link to the coffeeshop playlist
]]>The offerings are slightly different - for example, while EC2 is bare virtual hardware, App Engine is a web application platform in the cloud. But they all have similar pricing arrangements, based generally on uptime or CPU time, I/O and storage.
Does this seem familiar to you? It does to me, but then again, I did just turn 0x2B this month. In 1988 I was working in the Database Support Group at a major energy company in London, looking after the SAP R/2 databases, which were powered by IMS DB/DC, on MVS - yes, IBM big iron mainframes. I still look back on those days with fond memories.
In reviewing some 3rd party software, I wrote a document entitled "BMC Software's Image Copy Plus: An Evaluation". BMC's Image Copy Plus was a product which offered faster image copies of our IMS DB (VSAM) databases. (Image Copy Plus, as well as IMS, is still around, over 20 years on! But that has to be the subject of another post).
One of the sections of the evaluation was to compare costs, as well as time - by how much would the backup costs be reduced using BMC's offering?
And guess what the cost comparison was based on? Yes. CPU time, I/O (disk and tape EXCPs) and actual tapes.
Everything old is new again.
]]>I got my Wave Sandbox account a week or so ago, and have had a bit of time to have a look at how robots and gadgets work - the two main Wave extension mechanisms. To get my feet wet, I built a robot, which is hosted in the cloud using Google App Engine, another area of interest to me, and the subject of this weblog entry. I used Python, but there's also a Java client library available too. You can get more info in the API Overview.
What this robot does is listen to conversations in a Wave, automatically recognising SAP entities and augmenting the conversation by inserting extra contextual information directly into the flow. In this example, the robot can recognise transport requests, and will insert the request's description into the conversation, lending a bit more information to what's being discussed.
The robot recognises transport requests by looking for a pattern:
trkorr_match = re.search(' (SAPK\w{6}|[A-Z0-9]{3}K\d{6}) ', text)
In other words, it's looking for something starting SAPK followed by six further characters, or something starting with 3 characters, followed by a K and six digits (the more traditional customer-orientated request format). In either case, there must be a space before and a space following, to be more sure of it being a 'word'.
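Here's that pattern at work. Note that the backslash escapes (\w, \d) are a reconstruction from the surrounding prose, since they were mangled in an earlier platform migration of this post, and the example request IDs other than NSPK900115 are made up:

```python
import re

# SAPK followed by six word characters, or three characters, a K and
# six digits - with a surrounding space on each side to suggest a 'word'
pattern = re.compile(r' (SAPK\w{6}|[A-Z0-9]{3}K\d{6}) ')

examples = {
    " SAPK123456 ": True,   # SAP-format request
    " NSPK900115 ": True,   # customer-format request
    "NSPK900115": False,    # no surrounding spaces, so not a 'word'
    " ABCK12345 ": False,   # only five digits after the K
}
results = {text: bool(pattern.search(text)) for text in examples}
```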
How does it retrieve the description for a recognised transport request? Via a simple REST-orientated interface, of course :-) I use the excellent Internet Communication Framework (ICF) to build and host HTTP handlers so I can expose SAP functionality and data as resources in a uniform and controlled way. Each piece of data worth talking about is a first class citizen on the web; that is, each piece of data is a resource, and has a URL.
So the robot simply fetches the default representation of the recognised request's 'description' resource. If the request was NSPK900115, the description resource's URL would be something like:
http://<host>:<port>/transport/request/NSPK900115/description
Once fetched, the description is inserted into the conversation flow.
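In Python terms, that fetch is little more than the following sketch. The host and port in the test, and the plain-text default representation, are assumptions; urlopen stands in for whatever HTTP client the robot actually uses:

```python
from urllib.request import urlopen

def description_url(base, request_id):
    """Build the URL of a transport request's description resource."""
    return "%s/transport/request/%s/description" % (base, request_id)

def fetch_description(base, request_id, opener=urlopen):
    """Fetch the default (assumed plain text) representation of the
    description resource; the opener is injectable for testing."""
    with opener(description_url(base, request_id)) as resp:
        return resp.read().decode("utf-8").strip()
```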
]]>To give you a bit of background, I'm an SAP veteran of 22 years - starting out with R/2 version 4.1d in 1987, moving through R/3 in the mid-90s and on to Enterprise and beyond. But this is the first time I've studied SAP Business ONE in any detail. So while I have a lot of experience of SAP's traditional products, I'm approaching SAP Business ONE, and "SAP Business ONE Implementation", more as the potential owner of a small business.
I certainly haven't been disappointed. "SAP Business ONE Implementation" is written "for technically savvy business owners, entrepreneurs and departmental managers". And I think by and large the book does a great job of reaching out to and connecting with exactly that audience. I was expecting the book to be a fairly technically orientated implementation how-to. But it is more than that. It takes you from business first principles, connecting well at the level of sales, delivery, inventory, warehousing, manufacturing and other business challenges. It explains how SAP Business ONE is designed to address those challenges, and guides you through installation, implementation and some configuration of the system. Once the basics have been established, it moves further to cover project planning, reporting and analysis, business process analysis, customer relationship management, logistics & supply chain management, contract management, and ends up addressing, albeit briefly, more complex reporting tools and topics, data migration, and electronic commerce.
The book has fewer than 300 pages. A book that addresses the areas that this book does could easily be twice that size. But that's where this book does well. It's an approachable, undaunting and really rather good introduction to running your business with SAP Business ONE. The writing style is very easygoing, and informative without being patronising. There are plenty of examples, and all the screenshots you'd need. It doesn't try to be a reference book. It does try to be a sort of hybrid guide to solving the business and technical challenges of running a small or medium sized company using SAP software, and I would say that it succeeds.
If you're a small business owner considering stepping up and taking control of your business with SAP Business ONE, if you've already got SAP Business ONE and want to explore more application features at a high level, or even if (like me) you're an SAP hacker wanting to learn about what SAP Business ONE can do, then you could do a lot worse than grab a copy of this book.
]]>In Craig's Friday Morning Report yesterday, I suggested:
13:32 qmacro: get SAP hosting to send alternative 'STOLEN!' images to rogue referrer - Google 'image theft apache' for examples
and straight after the conference call finished, I thought I'd demo how that could be done. I implemented such a mechanism for images on an SDN blog post of mine, images that just happened to be hosted on my machine. I wrote about how that was done in a weblog post:
Dealing with "#blogtheft" from SAP's Developer Network
This morning, @thorstenster alerted me to the fact that SAP have now implemented this for images hosted here on SDN:
Now that's a great reaction! Kudos to the SAP Community Network hackers who look after the servers here. To implement something like that in such a short space of time and on the production servers ... I take my hat off to you folks. Well done.
]]>Lots of discussion is taking place on how best to deal with this. One way (and I'm posting it as a blog entry as much for my memory's sake as anything else) is to conditionally rewrite requests for images. I'm using Apache and therefore the mod_rewrite extension is my tool of choice.
It just so happens that there are a couple of screenshots in a recent SDN blog entry of mine, "A return to the SDN community, and a touch of Javascript", and these images are hosted on my own server.
So as a little test, I can control the requests for these images, rewriting those requests so that a different image is served depending on the request's referrer - the URL of the page on which the images are referenced with an <img/> tag.
So with some mod_rewrite voodoo in a local .htaccess file:
RewriteEngine On
RewriteCond %{HTTP_REFERER} ^http://www.sap-abap4.com
RewriteBase /qmacro/x
RewriteRule ^SdnPageTitle(Fixed|Broken)_small.jpg$ StolenContent.png [L]
I can send a StolenContent.png image, if the referrer is from the rogue site.
The result of the rewrite is that when viewed on SDN, the blog entry looks fine, and the screenshot images look as they're supposed to:
But when the images are used on www.sap-abap4.com, they will appear differently:
So there you have it. It's not a complete solution to the problem by any means, but it will at least alert unsuspecting readers of that website to what's happening (if you're testing yourself, you might have to refresh the pages in your browser, as it will probably have cached the first version of each image). Perhaps the SAP community network team can apply this technique for the images hosted on SDN.
]]>Consider for a moment what this command line of the future might look like. More and more people are online. More and more people are permanently connected, whether it be through DSL, cable, or 802.11 technology. And more and more of these people are communicating. Talking. Having conversations. In addition to email and Internet Relay Chat, or IRC, the (relatively) new kid on the block, Instant Messaging (IM), is playing a huge part in facilitating these conversations. And in the same way that it's common for us to have a command prompt or three sitting on our graphical desktop, it's also becoming common to have chat windows more or less permanently open on the desktop too.
But when thinking of IM, why stop at conversations with people? The person-to-application (P2A) world isn't the exclusive domain of the Web. Bots - applications or utilities whose interface is a projection of a persona into the online chat world - are a great and fun way to bring people and applications together in a conversational way.
Interacting with a bot is the same as interacting with a person: type something to it and it replies. And what's more, because of the similarities between a classic command-line prompt and a chat window where you're talking with a bot - both scenarios are text-based - interaction with a bot is scriptable.
Forward to the present.
Just the other day, @davemee and @technicalfault alerted me to @manairport, Manchester Airport's online persona on Twitter, obviously yet another chat-style interface. You can interact with it via direct messages (DMs). You follow it, it will follow you back, and you're away.
me: d manairport be7217
manairport: Received request for information: be7217
manairport: Status of 17:40 flight BE7217 to Dusseldorf departing T3: Scheduled 17:40
Nice and useful!
And then just this morning, I read a weblog post on SDN entitled "SAP Enterprise Service and Google Wave". In it, the author talks about connecting Google Wave (you guessed it, yet another chat-style interface, amongst other things) with SAP, in particular enterprise "services". In the short demo, order information from an SAP system is retrieved in a conversational way. The concept is great. The obvious issue with what's shown in the demo (and I know it's only a proof of concept) is that the bot responds with a data structure dump of information. What we're looking for is something more, well, consumable by humans. Smaller, more distinct and addressable pieces of information that can be returned and be useful.
But what was more telling, at least to me, were the difficulties he described in connecting to the complex Enterprise Service backend in SAP:
"... find the webservice ... create a proxy ... I did have some problems with calling the ES ... On Appengine there are some limitations on what you can call of Java classes ... From an architectural point I'm not real proved of the solution..."
Hmm. Why does architecture have to be complex? Using Enterprise Services, using SOA, is more complex than it needs to be. There's a reason why the web works. There's a reason why Google designed App Engine's backend infrastructure (including asynchronous task queues) in a simple HTTP-orientated way. There's a reason why the Wave robot protocol is based on simple HTTP mechanisms. There's a reason why mechanisms like PubSubHubbub and Webhooks are based on HTTP as an application protocol. Because simple works, and it works well.
Let's come back to the "smaller, more distinct and addressable" issue. If we let ourselves be guided by a Resource Orientated Architecture (ROA) approach, rather than a Service Orientated Architecture (SOA) approach, we end up with simpler application protocols, flexible, reliable and transparent integration, and pieces of information that are addressable - and usable - first class citizens on the web. This is Twitter's killer feature.
Enterprises suffer enough with complexity paralysis. We should endeavour to embrace the design of HTTP as an application protocol (which is what I'm doing with Coffeeshop), rather than fight against it.
]]>http://www.youtube.com/watch?v=NhAWH2-Quuk
In this shorter screencast, I continue on from where I left off - viewing the message detail resource in the web browser. I use conneg to request that same resource in JSON instead of HTML, and show how the JSON representation can be easily parsed, and the data reused, further along the pipeline.
]]>This time, the coffeeshop instance I'm using is one running on Google's App Engine cloud infrastructure - on appspot.com.
http://www.youtube.com/watch?v=TI48cdpWOBg
In the screencast, I also make use of Jeff Lindsay's great PostBin tool for creating the recipient resources for the Subscribers. It was originally created to help debug Webhooks, but of course, a Subscriber is a sort of Webhook as well. (PostBin runs on App Engine too!)
]]>However, I must call him on this small statement:
"XMPP is way too complicated for any normal human to deploy"
Compared to what? I'm getting the idea that he's referring to "simpler" mechanisms such as HTTP or SMTP servers. Simpler? Has Anil modified a sendmail config file recently?
These days setting up an XMPP server is pretty straightforward. Then again, I am perhaps somewhat biased :-)
"lets you debug web hooks by capturing and logging the asynchronous requests made when events happen. Make a PostBin and register the URL with a web hook provider. All POST requests to the URL are logged for you to see when you browse to that URL."
The article also shows a very simple pubsub "Hello, World" script, postbin.rb, that nicely demonstrates the basic features of Watercoolr - another HTTP-based pubsub mechanism.
So I thought I'd write the equivalent to postbin.rb, this time demonstrating the same features in Coffeeshop. This way, we can see how things compare. It's in Python, but that's neither here nor there.
import httplib, urllib, sys

hubconn = httplib.HTTPConnection('localhost:8888')

hubconn.request("POST", "/channel/")
channel = hubconn.getresponse().getheader('Location')
print "Created channel %s" % channel

hubconn.request("POST", channel + "subscriber/",
                urllib.urlencode({'resource': sys.argv[1]}))
subscriber = hubconn.getresponse().getheader('Location')
print "Added subscriber %s" % subscriber

while True:
    print "Post message:"
    msg = sys.stdin.readline()
    hubconn.request("POST", channel, msg)
    message = hubconn.getresponse().getheader('Location')
    print "Message published: %s" % message
I've added some print statements to show what's going on, and to highlight the HTTP resources created and utilised.
Hereās a sample execution:
```
python postbin.py http://www.postbin.org/1a5m8w0
Created channel /channel/1/
Added subscriber /channel/1/subscriber/2/
Post message:
Hello, Webhooks World!
Message published: /channel/1/message/ahFxbWFjcm8tY[...]RgDDA
Post message:
```
This message appears in the PostBin bucket as expected. Nice!
As well as showing how useful PostBin is, I hope this demonstrates how the basic features of Coffeeshop work, and perhaps more importantly, shows you that the REST-orientated approach is straightforward and works well.
http://www.youtube.com/watch?v=1E_1B8TD6Kw
The screencast shows the creation of a channel, the addition of a couple of subscribers to that channel, the publishing of a message to that channel, and the subsequent delivery of that message to the subscribers. I draw attention to the use of the browser-based part of the implementation, and to the asynchronous nature of the message distribution (I had to do this anyway, as on the App Engine SDK development server, tasks are not executed automatically; you have to start them manually in the admin console).
There seems to be a growing interest in pubsub and webhooks; one recent article in particular, "The Pushbutton Web: Realtime Becomes Real", conveys a lot of the ideas behind these concepts.
With coffeeshop, entities (Channels, Subscribers and Messages) are resources, with URLs. You interact with entities using the appropriate HTTP methods. The implementation, being HTTP, is both browser-based (human-facing) and agent-based (program-facing). You can navigate the resources with your web browser. You can interact with the resources with cURL, POST, or your favourite HTTP library.
Here's a simple example:
```
> # Create Channel
> echo "Test Channel" | POST -Se http://giant:8888/channel/
POST http://giant:8888/channel/ --> 201 Created
Location: /channel/5/

> # Add to Channel a new Subscriber with
> # a callback resource of http://atom:8081/subscriber/alpha
> echo "name=alpha&resource=http://giant:8081/subscriber/alpha" \
    | POST -Se http://giant:8888/channel/1/subscriber/
POST http://giant:8888/channel/1/subscriber/ --> 201 Created
Location: /channel/1/subscriber/2/

> # Publish a Message to the Channel
> echo "hello, world" | POST -Se http://giant:8888/channel/1/
POST http://giant:8888/channel/1/ --> 302 Moved Temporarily
Location: http://giant:8888/channel/1/message/ahFxbWFjcm8tY29mZmVlc[...]RgIDA
```
As you can see from this example, POSTing to the Channel container resource `/channel/` creates a new Channel, POSTing to the Channel 1 subscriber container resource `/channel/1/subscriber/` creates a new Subscriber, and POSTing to the Channel 1 channel resource `/channel/1/` creates a new Message, which is delivered to the Channel's Subscribers. The resource returned to a Message POST is that Message's unique address `/channel/1/message/ahFxbWFjcm8tY29mZmVlc[...]RgIDA`, where the details of that Message, including the Delivery status(es), can be seen.
For more information on the resources and methods, have a look at the ResourcePlan page.
I'm using Google App Engine's Task Queue API to have the Messages delivered to their (webhook-alike) endpoints asynchronously.
The code is early and rough, and available on github. You can download it and try it out for yourself locally, or create an app on App Engine's appspot.com cloud domain. I'll probably publish a public-facing instance of this implementation in the next few days. All comments and feedback appreciated.
One last thing: I know of at least a couple of HTTP-based pubsub implementations: pubsubhubbub, and Watercoolr. Both are great, but for me, the former is a little complex (and ATOM-orientated), whereas the latter I thought could be more RESTian in its approach (hence coffeeshop).
My oh my, how things have changed and progressed! We've seen the rise and rise of Open Source, the rise and fall of SOA, and the incredible improvements in connectedness and social collaboration in SAP events such as Sapphire & TechEd. Excellent.
Some things haven't changed so much, though. I'm reading SDN in earnest again, especially the weblog posts. And guess what? The use of frames in SAP portal technology is still hampering basic usability. A particular case in point is bookmarking; I can't usefully or easily bookmark a weblog post without some cut'n'paste gymnastics, because the page title is always the same: "SAP Network Blogs". It should be the entry-specific weblog post title, so you don't end up with 1001 bookmarks that you can't tell apart.
Not to worry. A couple of lines of Greasemonkey JavaScript later, in the form of sdnpagetitle.user.js (in the sdnpagetitle github repository), and things are fixed!
Funny, my last post before this one was OssNoteFix script updated for Greasemonkey 0.6.4 and Firefox 1.5 too!
Share and enjoy, and here's to the next 6 years :-)
As Bruno showed in the announcement, this is what the REST connector looks like:
It will take whatever values it receives in the title, description and link input fields on the left hand side of the connector, and construct a piece of JSON which it then sends, in application/x-www-form-urlencoded format, as a data= name/value pair.
So if we pass the values "DJ's Weblog" into the title field, "Reserving the right to be wrong" into the description field and "/" into the link field, and pass "http://example.org/bucket/" into the serviceUrl field, the following HTTP request is made on the http://example.org/bucket/ resource:
```
POST /bucket/ HTTP/1.1
Content-Length: 218
Content-Type: application/x-www-form-urlencoded
Host: example.org
Accept: */*

data=%7B%22items%22%3A%5B%7B%22title%22%3A%22DJ%27s+Weblog%22%2C%22description
%22%3A%22Reserving+the+right+to+be+wrong%5Cn%22%2C%22link%22%3A%22http%3A
%5C%2F%5C%2Fwww.pipetree.com%5C%2Fqmacro%5C%2Fblog%5C%2F%22%7D%5D%7D
```
(whitespace added by me for readability).
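For illustration, here's a minimal Python 3 sketch (my own, not Tarpipe's actual code, and the function name is made up) of how such a body can be built; the exact JSON escaping may differ slightly from Tarpipe's output:

```python
import json
from urllib.parse import urlencode

def build_connector_body(title, description, link):
    # Build the JSON 'items' structure the connector uses, then
    # form-urlencode it under a single 'data' key. (Hypothetical
    # helper; escaping details may differ from Tarpipe's own output.)
    payload = {"items": [{"title": title,
                          "description": description,
                          "link": link}]}
    return urlencode({"data": json.dumps(payload, separators=(",", ":"))})

body = build_connector_body("DJ's Weblog", "Reserving the right to be wrong", "/")
```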
When decoded and pretty-printed, that message body looks like this:
```
data={
  "items":[
    {
      "title":"DJ's+Weblog",
      "description":"Reserving+the+right+to+be+wrong",
      "link":"/"
    }
  ]
}
```
This is what your app gets to process.
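On the receiving side, unpacking that body is straightforward. A minimal Python 3 sketch (my own illustration, not part of Tarpipe; the helper name is made up):

```python
import json
from urllib.parse import parse_qs

def decode_connector_body(body):
    # Pull the single 'data' field out of the form-urlencoded body
    # and parse the JSON it carries. (Illustrative helper only.)
    fields = parse_qs(body)
    return json.loads(fields["data"][0])

body = "data=%7B%22items%22%3A%5B%7B%22title%22%3A%22DJ%27s+Weblog%22%7D%5D%7D"
items = decode_connector_body(body)["items"]
```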
Bruno said that the format was chosen to be compatible with the Yahoo! Pipes Web Service Module, and it sure is; look at this example from the Web Service Module documentation:
data={ "items":[ { "title": "First Title", "link": "http://example.com/first", "description": "First Description" }, { "title": "Last Title", "link": "http://example.com/last", "description": "Last Description" } ] }
And what about those three output fields on the right hand side of the REST connector? Well, if your app returns a response with JSON in the body (this time not as a name/value pair, but as pure JSON) like this:
{ "items":[ { "title": "The response!", "description": "Long text description of the response", "link": "http://example.org/banana/" } ] }
then the workflow can continue and you can connect those values in the corresponding title, description and link output fields as input to further connectors.
Happy tarpiping!
Back in the day, I talked about, wrote about and indeed built interconnected messaging systems based around the idea of a message bus, that has human, system and bot participation. The fundamental idea was based around one or more channels, rooms or groupings of messages; messages which could be originated from any participant, and likewise filtered, consumed and acted upon by any other. I wrote a couple of articles positing that bots might be the command line of the future.
Using my favourite messaging protocol, I built such a messaging system for an enterprise client. This system was based around a series of rooms, and had a number of small-but-perfectly-formed agents that threw information onto the message bus, information such as messages resulting from monitoring systems across the network ("disk space threshold reached", "System X is not responding", "File received from external source", etc) and messages from SAP systems ("Sales Order nnn received", "Transport xxx released", "Purchase Order yyy above value z created", etc). It also had a complement of agents that listened to that RSS/ATOM-sourced stream of enterprise consciousness and acted upon messages they were designed to filter: sending an SMS message here, emailing there, re-messaging onto a different bus or system elsewhere.
So what does this have to do with Twitter? Well, Twitter is a messaging system too. And Twitter's "timeline" concept is similar to the above message groupings. People, systems and bots can and do (I hesitate to say "publish" and "subscribe to" here) create, share and consume messages very easily.
But the killer feature is that Twitter espouses the guiding design principle:
and everything is available via the lingua franca of today's interconnected systems: HTTP. Timelines (message groupings) have URLs. Message producers and consumers have URLs. Crucially, individual messages have URLs (this is why I could refer to a particular tweet at the start of this post). All the moving parts of this microblogging mechanism are first class citizens on the web. Twitter exposes message data as feeds, too.
Even Twitter's API, while not entirely RESTful, is certainly facing in the right direction, exposing information and functionality via simple URLs and readily consumable formats (XML, JSON). The simplest thing that could possibly work usually does, enabling the "small pieces, loosely joined" approach that lets you pipeline the web, like this:
```
dj@giant:~$ GET http://twitter.com/users/show/qmacro.json | \
    perl -MJSON -e "print from_json(<>)->{'location'},qq/\n/"
Manchester, England
dj@giant:~$
```
None of this opaque, heavy and expensive SOA stuff here, thank you very much.
And does this feature set apply only to Twitter? Of course not. Other microblogging systems, notably laconi.ca (most well known for the public instance identi.ca) follow these guiding design principles too.
What's fascinating about laconi.ca is that just as a company that wants to keep message traffic within the enterprise can run their own mail server (SMTP) and instant messaging & presence server (Jabber/XMPP), so also can laconi.ca be used within a company for instant and flexible enterprise social messaging, especially when combined with enterprise RSS. But that's a story for another post :-)
So I was thinking about doing something useful with Apache's access log, more than what I already have with the excellent Webalizer. Inspired (as ever) by Jon Udell's "ongoing fascination with Delicious as a user-programmable database", I decided to pipe the access log into a Perl script and pull all the Google search referrer URLs that led to /qmacro/CV.html. For every referrer URL found, I grabbed the query string that was used and split it into words, removing noise. I also made a note of the top level domain for the Google hostname, a very rough indication of where queries were coming from.
But rather than create a database, or even an application, to analyse the results, I just posted the information as bookmarks to Delicious (after a simple incantation of `perl -MCPAN -e 'install Net::Delicious'` - just what I needed, thanks!).
Delicious is a database, and by its very nature and purpose has a flavour that lends itself very well to loosely coupled data processing and manipulation. It's about URLs and tags. It's about adding data, replacing data, removing data. Basic building blocks and functions. Every item in the database has, and is keyed by, a URL, and as such, every item is recognised and treated as a first class citizen on the web. Even the metadata (tag information) is treated the same.
So what did I end up with? Well, for a start, I have a useful collection of referring CV search URLs, the collection being made via a common grouping tag "cvsearchkeywords" that I assigned to each Delicious post in addition to the tags derived from the query string.
I also have a useful analysis of the search keywords, in the list of "Related Tags", tags related to the common grouping tag. I can see right now, for example, that beyond the obvious ones such as "cv", popular keywords are abap, architect and developer. What's more, that analysis is interactive. Delicious's UI design, and moreover its excellent URL design, mean that I can drill down and across to find out what keywords were commonly used with others, for example.
That collection, and that analysis, will grow automatically as soon as I add the script to the logrotate mechanism on the server. That is, of course, assuming people remain interested in my CV!
And my favourite referrer search string so far? "How to write a CV of a DJ" :-)
When I see the first book on SAP hit the bookstores, it's time to move on :-)
In those days there were no books on SAP, and I was still in shock from receiving SAP documentation properly printed and bound; in the early days we had SAP install guides on green and white striped fanfold paper from daisywheel printers, with sentences literally half in German, half in English.
How things have changed. Beyond the SAP Developer Network, which I can proudly say I had a hand in forming and nurturing, I've just seen a video on YouTube by Jon Reed on how to find and follow SAP people on Twitter! I've also just added myself to the SAP Affinity Group. A long way from SAP-R3-L!
Perhaps it's time to rebuild Planet SAP?
I couldn't wait, however, and thought I'd have a bit of fun building an HTTP connector. I don't have access to Tarpipe's sources, so I had to go a roundabout route. Tarpipe has a Mailer connector, which enables emails to be sent from within a workflow. So I built a very simple email-to-HTTP-POST mechanism, "tarbridge". This way, you can use the Mailer connector to send an email like this:
Recipient: tarbridge+<token>@pipetree.com
Subject: the URL to POST to and an optional content-type
Body: the payload of the HTTP POST
and an HTTP POST will be made to the URL specified. You'll even get an email reply with the HTTP response.
Here's an example workflow that receives an email containing something to bookmark in Delicious. It uses the Delicious connector, and also makes an HTTP POST to a little test application (running on a local devserver version of the excellent Google AppEngine, fwiw) via tarbridge.
The Subject of the email contains the URL to make the HTTP POST to. By default the Content-Type will be set to application/x-www-form-urlencoded, but you can override this by specifying a different content type (here I've specified text/plain) as a second parameter in the Subject.
The addressee of the email is tarbridge+<token>@pipetree.com, as shown earlier.
The body of the email is what's sent as the payload in the HTTP request.
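The Subject convention is simple enough to sketch. Here's an illustrative Python version of just the parsing step (the real tarbridge is a Perl script triggered via Procmail, and this helper name is made up):

```python
def parse_tarbridge_subject(subject):
    # Split the Subject into the target URL and an optional content
    # type; default to form-urlencoded when none is given.
    # (Illustrative sketch of the convention, not the actual script.)
    parts = subject.split(None, 1)
    url = parts[0]
    if len(parts) > 1:
        content_type = parts[1].strip()
    else:
        content_type = "application/x-www-form-urlencoded"
    return url, content_type
```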
So sending this email to the Tarpipe workflow above:
```
From: DJ Adams <dj@pipetree.com>
To: bury69xxxx@tarpipe.net
Subject: http://blog.tarpipe.com Tarpipe blog
```
results in this Delicious entry:
and this email sent, via the Mailer connector, to the tarbridge mechanism:
```
To: tarbridge+token@pipetree.com
Subject: http://www.pipetree.com:8888/feed/ text/plain
From: tarpipe mailer <mailer@tarpipe.net>

http://blog.tarpipe.com
http://del.icio.us/url/95948a42d8777b46278d4da333345473
```
which in turn results in an HTTP POST being made like this:
```
POST /feed/ HTTP/1.1
User-Agent: tarbridge/0.1 libwww-perl/5.812
Host: www.pipetree.com:8888
Content-Type: text/plain
[...]

http://blog.tarpipe.com
http://del.icio.us/url/95948a42d8777b46278d4da333345473
```
The result of the HTTP POST is emailed back like this:
```
Subject: Re: http://www.pipetree.com:8888/feed/ text/plain
To: DJ Adams <dj.adams@pobox.com>
From: tarbridge+token@pipetree.com

HTTP/1.0 201 Created
Date: Fri, 24 Apr 2009 10:06:55 GMT
Location: http://www.pipetree.com:8888/feed/test-feed-1/agtmZWVkYnVpbGRlc[...]
[...]
```
So if you were really crazy you could even feed that response back into the Tarpipe loop, using a second workflow (hmm, Tarpipe could do with a string parsing connector too) :-)
The tarbridge mechanism is just a little Perl script that's triggered via Procmail. I'm running Ubuntu on pipetree.com so it was just a question of configuring Postfix to use Procmail for delivery, and writing a .procmailrc rule like this:
```
:0 c
| ~/handler.pl 2>> ~/tarbridge.log
```
If you're interested in trying this out using my (pipetree) instance of tarbridge, please email me and I'll set you up with a token. Usual caveats apply. And remember, this is only in lieu of a real HTTP connector, which I hope is coming soon from Tarpipe!
I first read about Tarpipe from Curt Cagle's "Analysis 2009". In turn, Curt points to Jeff Barr's post which describes the concept and the implementation very well. It's a fascinating concoction of Web 2.0 services and visual programming (in the style of Yahoo! Pipes), and in its beta infancy has that great "wow, imagine the full potential!" feel to it.
Here's an example of what I've been playing around with. With my phone (and with the Google G1 phone it's so easy) I can snap a picture of the beer I'm drinking, and email that picture to a Tarpipe workflow, along with the name of the beer in the subject line and a list of tags rating the beer in the body.
The workflow uses the existing Tarpipe connectors to:
All in the space of a few clicks and drags! Here's a shot of that workflow (with a couple of connectors partially obscured; it's a known bug in Tarpipe):
But what's more fabulous: Tarpipe has been ideal for my son Joseph to start up with programming, with me. And he finds it really interesting. Visual, direct feedback, using and connecting things and services he understands. Gone are the days of
```
10 PRINT "HELLO WORLD"
20 GOTO 10
```
on black and white low-res displays.
After explaining a few concepts, Joseph was totally up and away, building his first workflow, which is pretty impressive! (I'm a biased, proud dad of course :-) And now we're off looking at Yahoo! Pipes too, and he's asking how we can link the two services together.
Hello, new programming world.
The power of HTTP, and the voodoo of mod_rewrite, allow me to fix things. Inserting these lines into the relevant .htaccess files does the trick:
```
RewriteRule ^index\.(xml|rdf)$ /feed/atom/ [R=301,L]
RewriteRule ^xml$ /feed/atom/ [R=301,L]
```
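The effect of those two rules can be sketched in Python (an illustration of the matching logic only, not how Apache implements it; the function name is made up):

```python
import re

# The two rewrite rules, as (pattern, target) pairs.
RULES = [
    (re.compile(r"^index\.(xml|rdf)$"), "/feed/atom/"),
    (re.compile(r"^xml$"), "/feed/atom/"),
]

def redirect_for(path):
    # Return the (status, location) a matching request would get,
    # or None when no rule applies.
    for pattern, target in RULES:
        if pattern.match(path):
            return 301, target
    return None
```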
Now the bots are redirected to this weblog's shiny new feed. And I'll try not to change the URL again :-)
What's more, my son Joseph is online now too, complete with blog, identi.ca & Twitter accounts, and more!
Anyway, I've got myself a local copy of WordPress, and am slowly retrieving my past with the help of The Wayback Machine. It's a slow and not entirely painless process, but I'm getting there. I'm doing a month at a time, and am up to Jan 2003. Nothing's properly categorised or tagged yet, nor are all the links working perfectly. There are even some posts that aren't properly datestamped yet! More importantly, I haven't yet put the mod_rewrite magic in place to reduce the 404s that I'm seeing in my HTTP access log.
Watch this space.
- put the OSS number and note title in the page's title (and therefore the browser window/tab too)
- made OSS note numbers in the text of the OSS note into clickable links
- removed the dreadful frames so you can, e.g., bookmark the notes
You can read more about it in Hacking the SAP service portal to make OSS notes better or just watch the screencast of how it works.
Since then, new versions of Firefox (1.5) and Greasemonkey (0.6.4, for Firefox 1.5) have been released. Greasemonkey's security model has changed, and OssNoteFix stopped working. Well, this weekend I finally found a couple of tuits and got round to updating the script, which is now focused on running with these releases of Greasemonkey and Firefox (if you haven't upgraded, do so now!).
So without further ado, OssNoteFix 0.2 is available for installation (see below).
For those of you who are interested, and/or want to be confused, here's some background info. Of course, you can look at the code to see how it works and what changes were made, and even modify it to suit your own purposes, because it's Open Source.
Share and enjoy!
// OssNoteFix
// version 0.1 BETA!
// 2005-05-18
// Copyright (c) 2005, DJ Adams
// OssNoteFix
//
// ==UserScript==
// @name OssNoteFix
// @namespace http://www.pipetree.com/qmacro
// @description Make OSS note pages more useable
// @include https://*.sap-ag.de/*
// ==/UserScript==
//
// --------------------------------------------------------------------
//
var textnodes, node, s, newNode, fnote;
// This is the URL to invoke an OSS note. Ugly, eh?
var linkurl = "<a href='https://service.sap.com/~form/handler?_APP=01100107900000000342&_EVENT=REDIR&_NNUM=$1'>$1</a>";
// Right now, an OSS note number is 5 or 6 consecutive digits,
// between two word boundaries. Should be good enough for now.
var ossmatch = /\b(\d{5,6})\b/g;
// Act upon the 'main' framed document which has a form 'FNOTE'
// and the title 'SAP Note'.
if ((fnote = document.FNOTE) && document.title.match('SAP Note')) {
// Get stuffed, evil frames!
if (top.document.location != document.location) {
top.document.location = document.location;
}
// Make a useful document title from the OSS note number,
// found in the FNOTE form's _NNUM input field, and the
// OSS note title (which is in the first H1 element).
var h1 = document.getElementsByTagName('h1')[0];
var heading = h1.firstChild.data;
heading = heading.replace(/^\s*(.+?)\s*$/, "$1");
document.title = fnote._NNUM.value + " - " + heading;
// Make the plain text references to OSS notes into a href links
// pointing to their home in http://service.sap.com
textnodes = document.evaluate(
"//http://text()",
document,
null,
XPathResult.UNORDERED_NODE_SNAPSHOT_TYPE,
null);
for (var i = 0; i < textnodes.snapshotLength; i++) {
node = textnodes.snapshotItem(i);
s = node.data;
// Got a match? Make it into a link
if (s.match(ossmatch)) {
newNode = document.createElement('div');
newNode.innerHTML = s.replace(ossmatch, linkurl);
node.parentNode.replaceChild(newNode, node);
}
}
}
]]>
Today Obie Fernandez points to some hopelessly weak arguments against scripting / dynamic languages … from the father of Java himself, James Gosling.
Is Gosling's post just a moment of madness, or a sign of hopeless desperation?
Anyway, I'm glad to say that the conversation is still continuing, as it needs to. Shai hasn't responded yet, but I won't hold my breath. There have been (other) SAP employees that have given their comments, which I value greatly. If you've got something to add, don't forget to add your comment to the post! There are a handful of critical debates for people to have in the SAP world. Critical for SAP, and critical for us. Open Source is one of those debates.
Update: Discussion about Open Source covers the current top three weblog posts on SDN right now. That said, Shai still hasn't responded.
I wrote a little piece for the O'Reilly Radar on how I got into computers. It brought back lots of memories, a strong one of which was the hours, days, weeks I spent on my Acorn Atom. I have my original Atom in storage, so I turned to MESS for help in getting an emulated Atom booted up. Ahhh, even now the font and cursor size are whisking me back in time …
Rather than focus on some of the worrying remarks that others have already commented upon (intellectual property socialism, innovation, and so on), I'd like to take one part that deals with source code, as that's been my bread and butter for the last 18 years of working with SAP software.
Shai is understandably keen to see that his comments are not misrepresented (see "I LOVE Open Source - Really!"), so I took the time to transcribe exactly what he said in the interview. What follows is from between 35:40 and 37:00 of the interview's MP3 file, when he responds to a rather general question about Open Source. The response deserves some analysis.
So we analyse Open Source a lot, in the, you know. Most people don't know it about SAP but we are one of the first Open Source and one of the worst hit Open Source company [sic] in the world.
Worst hit? What does that mean? It's difficult to tell, because it doesn't really make sense, so I can only assume it's either general FUD (equating Open Source to an undefined but undesirable situation) or a taste of what's to come later on in his response. I suspect the latter.
That said, let's give SAP its dues; I've long carried the flag for SAP for making (most of) the source code to R/2 and R/3 available (see my comment to Visiting SAP NetWeaver Development Nerve Center, for example). But don't get out the champagne yet …
We shipped all of our applications to all of our customers "source open". So the processes that you get from SAP, you get the source of the processes.
To a large extent, that's true. Of course, it depends how you define "application". Source code for the business applications, in the form of ABAP (and assembler in R/2 as well) is available. But source code to the kernel, and certain parts of the Basis (err, sorry, "Web Application Server") system, is not.
And you're allowed to modify them, which causes the worst disaster in our ecosystem because every single one of our customers decided that "that's a great idea, let's go modify the source". And when they get the next version, they go "well, what do I do with all my modifications?".
Err, excuse me? So this is perhaps what the "worst hit" FUD earlier was about. Disaster? Far from it, Shai, far from it. In my not so humble opinion, a major part of SAP's success was precisely because of the Open Source nature of the application code it delivered to the customer. Shai distinguishes two levels of "open source": a "read-only" level for debugging, and a "read-write" level for modifications. So let's go with that and address each level in turn:
"Read-only": one of the reasons SAP's support departments didn't get as swamped as they might with customer questions (stemming from, for example, incomplete documentation) is because the customer was able to look at the code, debug what was going on, and work out for himself what was supposed to be happening. And rather than having to contact SAP to ask for custom modifications, in many cases they could simply copy the code into their own namespace and make the modifications they needed.
"Read/write": far more important than "read-only", this allowed customers to not only modify the code to do what they wanted, but also to fix code from SAP that was broken. Not only that, but they could then send the fixes back to SAP to be incorporated into the next put level / hot package / service release. SAP benefitted (and continues to do so) enormously from this angle.
I remember even back to the late 1980s making a major change (well, rewrite) to an asset management batch program in R/2, for which we had of course the source, in this case 370 assembler. The problem had been one of performance, and we had the author of the program visiting us from Walldorf. After my changes, the program ran orders of magnitude faster, and the chap (rightly) took the code changes back with him to SAP. This is just a single example. I've lost track of the countless fixes I and my colleagues have supplied SAP with over the last 18 years. I don't begrudge SAP these fixes at all; after all, they're programmers too (although SAP support these days leaves me somewhat cold, but that's another story).
So the benefit, to SAP and to customers, of having read/write access to the source code is HUGE. As someone who has wrestled with SAP software for this length of time, I can't stress that enough.
And so there's a, in our industry there's a very interesting balance that you need to keep; there's certain things that you need; it's almost the difference between what happens in the CPU and what happens outside the CPU for Intel. You don't touch the transistors inside the CPU because, you know, you want to make sure your divisions always work correctly.
Ok, nothing really to comment on here, except to say that the parallel between source code and electronics, while on the surface seemingly reasonable (both "high-tech" and "computing" related), is in fact fairly inappropriate.
We're going back into a model where we're going to take some of that code that was open in the past and put it more into a closed box and put web services, well defined, documented service interfaces to that, and then say above that, you get open. Above that you get open models, you get open source, you get everything you want in order to modify.
WHOA.
Hold on there a second. Out of everything that Shai said in his answer about Open Source, this is (to me) the most worrying. Let me repeat what he just said: "… take some of that code that was open in the past and put it more into a closed box". Let's just make sure we understand what he said. It's fairly clear cut, especially as the sentence that follows pretty much closes the deal: "Above that you get open models, open source". Above the closed boxes. SAP is going to take some of the source that's been open, and close it. Remove the open access to it.
I don't want to appear alarmist, but this is alarming in the extreme. Software that SAP has delivered "source open" in the past will be delivered "source closed" in the future? Well, that's what he said. And we're seeing that already today. Let's step out of the assembler and ABAP world for a second, and into SAP's J2EE world. Hands up those of you already frustrated with SAP only delivering software in compiled classes, without the Java source? Right. This is already happening.
This, then, is a potential watershed in the SAP world. Whether you agree with Locke, Searls, et al., business is a conversation. SAP's business has been delivering applications, in the form of software, to its customers. And those customers have taken part, to their and SAP's great benefit, in a conversation at the software level.
So SAP, and Shai, if I have one plea, it is this: do not deny a major reason for SAP's success in the past and present, and do not close the doors on your customers in the future. Thank you.
Originally uploaded by qmacro.
SAP found success with R/2 and R/3 for many reasons, one of which was abstraction. Because they wanted to offer a set of uniform facilities across a range of vastly different platforms, they built abstractions for basic services such as jobs (background processing), spooling, process and session management, database access, and so on. These abstractions, these inventions of layers on top of OS-level services contributed a great deal to the success of SAP implementations.
However, this culture of abstraction, combined with decades of being the original and best wall-to-wall ERP software solution, has been causing problems since SAP started the long process of shaking off its monolith mantle and starting to compete and coexist more with the rest of the software world, and the internet (most particularly the web). SAP are used to designing, building and delivering things on their own terms, according to their own culture, and based on their view of the rest of the world. The problem is, the tech world in which SAP deliver software and services today is vastly different to what it was ten or twenty years ago, and SAP's size is making it difficult for them to adapt quickly enough.
Let's take the web as an example (which clearly didn't exist then, but has for a good while now). And within the web example, let's take a bread and butter service: OSS Notes, within the larger context of the service portal. Essentially, an OSS note is a document that describes a particular issue, typically with Symptom, Cause, Solution and other sections (including links to software corrections). Pretty straightforward. An OSS note has versions, a status, belongs to an application area, and has other data associated with it, but can be essentially represented as a web page.
The power of the web is vast, but how that power is presented is subtle. Hyperlinks, addressing (URLs), reliable navigation, and so on. And at the user end, we have the UI (the browser) that contains basic but important tools such as bookmarking, browsing history, and simple features like showing what the address of a hyperlink is when you hover over it.
But what SAP has done in implementing OSS notes on the web (they were previously only available on an SAP system that you had to connect to with SAPGUI) shows all the signs of the abstraction (re-invention) culture, and the struggle SAP still has in embracing the rest of the world.
First of all, the OSS notes are available from within the service portal, which is beset with all manner of navigation difficulties and breaks many of the cardinal rules of web design (frames, popups and new windows, impossibly long URLs, overuse of Javascript, pages that don't fit in your browser even at 1024×768, but I digress …) – in fact the most telling symptom of the portal's problems is the fact that SAP never refer to specific URLs for things in the portal; you always receive instructions such as
"Go to this base URL, then click here, then here, then here, then here to get what you're looking for" – and of course invariably the texts and hyperlinks that you click through one month have been changed by the next month, and this sort of navigation description breaks down entirely (ever tried to find a specific version of, say, SAPINST?)
But the main problem is that the OSS notes are only on the web in the letter of the law. In the spirit of the law, they're not. The frameset-induced misery means that you can't use basic browser tools to bookmark and otherwise organise OSS notes the way you want to. But it's even more interesting than that – on the top frame of each OSS note page, there are Javascript-powered "favourites" and "subscribe" links. Why can't I just use the power of the web – URLs – to manage my own favourites, either in my browser, or using external tools?
Furthermore, even if you overlook the problems caused by overengineering – this abstraction layer of web upon web – you can't escape the fact that the machine-translation of OSS note content into HTML is beset with problems. Formatting issues mean that you soon lose your way in a long OSS note when it has nested bullet points. Also, none of the things referred to, which are available somewhere in the SAP portal, are hyperlinked (in fact, nothing is hyperlinked).
Finally, there is actually a unique URL for each OSS note, but each one is extremely long, bears no relation to the OSS note number, and isn't easy to exchange in, say, an IM/IRC or email-based chat.
You might think I'm particularly picking on OSS notes. I'm not; it's just that it's a tangible (and in-your-face) example of how things can go wrong when the culture of abstraction and the oil-tanker-like momentum cause SAP programmers to over-engineer a solution. (And it's been on my mind recently too.) There are plenty of other examples where SAP is unnecessarily re-inventing stuff – take SAPtutor, for example – there are plenty of platform-independent ways of presenting slides, video and audio on the web. Why, then, invent yet another format that needs a special player, that's only available on a single platform? But I digress (again) …
So, to the title of this post, then. Where is SAP going? They've made good progress in opening up to the world (albeit with a number of wrong turns, in the past and more recently), but there's a lot to do. I know there's a lot to do, as I've seen it first hand while performing some Basis activities recently, and having to use the service portal to get to where we need to be.
Can SAP adapt? Can they start to embrace, rather than resist, the environments in which they find themselves today? Can they tune their complex culture (of complexity) to deliver a better service and better software?
I have faith in them. But sometimes, when you're making a living as a SAP-hacking footsoldier, it's hard.
Update:
Shortly after posting this, Piers wrote an interesting followup on how the closed culture may end in doom. Also, I spotted a well-written post by Ryan Tomayko called Motherhood and Apple Pie today which spookily touches on the core point of this post – that SAP are resisting the very tools and technologies and design axioms that make the most scalable and widely distributed meta-application tick.
For the impatient, there's the screencast, and the ossnotefix.user.js script.
One of the questions was "Why do you write two blogs?". I wrote a longish reply, and thought it was worth putting up here (mostly because he didn't use any of it ;-)
My answer:
Well, there are many reasons. Here are the main ones:
(a) History
I was writing on my blog before SDN came along. SDN came along, and I was invited to write some posts there. I did. I continue to post on my blog while contributing to the collective SDN one too.
(b) Freedom
The web is a great leveller. There's no "us and them" anymore. And with weblogs crystallising the essence of publishing at the individual level, everyone benefits. Get a weblog, express your voice. It's your individual press. I'm a member of SDN, but like everyone else, still a guest there. SDN is run by, hosted by, and funded by SAP. So naturally I feel restricted in what I can, or should, say. There have been occasions when what people have written on SDN has ruffled a few feathers. Sometimes because what they wrote is ridiculous and negative, and others because what they wrote flies in the face of where SAP is going, in a technology context (I experienced the latter first hand). Writing in my own blog means that I know that I'm not going to be censored, or have my posts pulled. This isn't by any means a criticism of SAP or SDN. It's just the way it is.
(c) Technology
When SDN first came along, it wasn't properly on the web. It was an island, blocked off by the requirement to register and log on with a userid and password. Many people (including me) hassled the SDN team into removing the registration and authentication restriction (at least for people who just wanted to read stuff). And they did. Kudos to those who made the wheels turn (and they know who they are).
Now it's time, in my opinion, for SDN to embrace community technologies even more, and use the power of RSS (and / or Atom) to aggregate weblog posts into one big "Planet SAP". Syndicate blogs from around the web-o-sphere into a single place. Many communities do this to great effect. If SDN doesn't do it, I'm sure someone else will come along and do it eventually. Embrace, don't resist :-)
So if there was a "Planet SAP" weblog aggregation mechanism, I'd only have to write on one blog, my own, and the SAP stuff would appear in SDN.
Actually, I write on three blogs. The third is a shared "Mr Angry" type weblog where I rant. And rant. And rant.
As I was writing the bit about aggregation, I thought "Why don't I do that?". So I did. Using the excellent Planet Planet aggregation software, I put together Planet SAP.
It's in the early stages, gathering posts from only a small number of feeds. If you have SAP related things to say, and a feed for it, give me a shout, and I can add it. It's just an experiment right now … let's see how it goes.
I guessed that "SAP-WUG" is a descendant of the venerable "SAP-R3-L" mailing list hosted by MIT, and I was immediately whisked back years to when that was formed, and beyond.
Before SAP-R3-L there were two mailing lists: "sapr3-list", run by Bryan Thorp in Canada, and "merlin", run by me, in the UK. We both formed our lists in the first half of 1995, and for a while didn't know about each other (or each other's list). Running a list ate a lot of resources, both in computing terms and in human terms – I remember I was hacking on SAP at an oil company up in Aberdeen at the time, and after a day's work would return to my hotel room and spend a couple of hours in "list maintenance" mode each night. It was pretty time consuming.
Eventually MIT approached us both and gave us the opportunity to merge the two lists, and have the new list, which would be called SAP-R3-L, hosted and run by MIT. We still would have administrivia tasks, which we'd share and delegate, but it was a great offer (thanks MIT) and SAP-R3-L has left a great legacy.
Anyway, this year marks the tenth anniversary of sapr3-list, merlin, and SAP-R3-L.
The nearest thing I could find to commemorate was this bottle of bubbly, handed out to people in Walldorf (I was working there at the time) at a party in the car park.
So, happy anniversary, SAP mailing lists one and all!
As an ex-MVS chap (I managed VSAM (DL/1) based SAP R/2 systems on IMS DB/DC at the start of my career) I was amazed some months back to find Hercules, the open source S/370 emulator.
So imagine my delight when I revisited Hercules the other day, to find a chap called Volker Bandke had put together an MVS 3.8J Turnkey system that you can install and run on your emulated mainframe.
Using the ISPF-alike RPF, and QUEUE, a facility similar to SDSF, I am in oldtimer-heaven.
And of course, the first thing I tried (after a bit of jiggery pokery setting things up) had to be the inevitable:
//HWORLD JOB CLASS=A,MSGCLASS=A,MSGLEVEL=(1,1),REGION=256K
//STEP1 EXEC PGM=IEBGENER
//SYSUT1 DD *
HELLO WORLD!
//SYSUT2 DD SYSOUT=A
//SYSPRINT DD SYSOUT=A
//SYSIN DD DUMMY
Welcome back JCL, my long lost friend.
Update
I found some great pictures of real vintage terminals connected up to contemporary emulators (of vintage hardware) at Corestore Collection. This is what the site's owner calls "technological hooliganism" :-) Seeing this picture of a 3278 takes me back – I spent a good part of the start of my IT career in front of one of these…
He goes on to point out the distance between that, and what I presume he thinks of as "grown up" ERP software.
How do we interpret what he said? Was it a temporary slip? Or did he really mean it? Either way, it's worrying. Perhaps it's the Java lobby and their puzzling stance on (not) making Java open source – is he trying to protect SAP's investment in the new COBOL?
Perhaps it's a momentary loss of touch with reality; to bring yourself back, Shai, ask yourself this (especially bearing in mind SAP's attempt to move closer to open standards and "Web Services") – what do you think 90% of the world's largest scale web services are written in, and run on? Yes – open source and free software!
SAP's dominance of the business software market, combined with the sheer size and momentum of the company and its developers, sometimes makes it hard for those inside to see the reality outside. So I can understand why statements like those of Agassi and Kagermann are made.
Nevertheless, it makes me sad to think that they're perhaps forgetting the enormous cooking pot, the catalyst, that is the ABAP language and the business applications that have been delivered, in an open source fashion (the source is available to see and modify), to customers for the past decade or so. Both SAP developers and SAP customers have benefitted from this cooking pot; the former due to Linus's law ("given enough eyeballs, all bugs are shallow"), and the latter due to the fact that customers can learn from, build upon, and fix code delivered from SAP.
Anyway, let's see where this debate leads. SAP's stance on open source is one thing; the stance on IP and software patents in Europe (read to the end of the article to find out what I'm referring to) is something else entirely, and more worrying. Come on chaps, do we really want a patent system that's as ridiculously messed up as the one in the U.S.?
Update: Frank K looks at the quote differently, and also mentions SAP's involvement with Zend. Of course, this is one of many initiatives (MaxDB and contributions by SAP's LinuxLab to the GNU/Linux kernel, to name a couple of others) that SAP are undertaking. Don't get me wrong – the reason why I was so shocked is that it was such a left-of-centre stance all of a sudden. I've defended SAP's open source initiatives in the past, and I'll do it again.
In all, I'm still quite confounded by the implication that open source is for students and not for "serious" software. In our SAP landscape, we have major SAP-powered applications that are written in Perl (with Apache, running on Linux). Perhaps it's me. I dunno. Time for a beer. Cheers!
The other day I decided to stop going on about how painful using OSS notes on the web was, and do something about it. So I hacked up a Greasemonkey script, OssNoteFix
, which addresses the three main issues I have: the frameset gets in the way of bookmarking and other basic browser tools, the document title is useless for identifying a note, and plain-text references to other OSS notes aren't hyperlinked.
Greasemonkey, to quote Mark Pilgrim in his very useful "Dive Into Greasemonkey" online book, "is a Firefox extension that allows you to write scripts that alter the web pages you visit. You can use it to make a web site more readable or more usable. You can fix rendering bugs that the site owner can't be bothered to fix themselves". The extension doesn't do anything to web pages by itself; it's the scripts that manipulate the pages once they're loaded into the browser. (And yes, it's for Firefox, a modern, standards-compliant browser. If you're still using Internet Explorer, shame on you.)
But before we get to the script, let's lay a bit of groundwork that will help smooth things along. Visit Dagfinn's weblog post Easily access SAP notes from Firefox and follow his instructions to set up SSO access to service.sap.com, and to create a bookmark with a custom keyword so you can access OSS notes very simply. The SSO access avoids all those tiresome HTTP authentication popups your browser throws at you each time the front-end machine serving your request changes due to load balancing. The custom keyword bookmark allows you to request OSS notes directly by typing something like this into your address bar:
note 19466
Once you've got these set up, it's time to install Greasemonkey. Visit the Greasemonkey homepage and follow the link to install it (you might have to add the Greasemonkey site to the list of sites allowed to install software). You'll have to restart Firefox to have this extension take effect.
Now it's time to install the Greasemonkey script that I wrote, OssNoteFix
. Go to http://www.pipetree.com/~dj/2005/05/OssNoteFix/ossnotefix.user.js
(no longer available, see below instead). Because of the ending (.user.js), Greasemonkey recognises it and gives you the option of installing it: Tools->Install User Script.
Once you've got it installed, visit an OSS note page:
note 19466
and notice that, once it's loaded, the page has busted out of its frameset, the document title now shows the note number and title, and plain-text references to other OSS notes have become hyperlinks.
Hurrah!
I put together a screencast which demonstrates the creation of the OSS note bookmark, a visit to an OSS note page before OssNoteFix, the installation of the OssNoteFix user script, and the visit to an OSS note page after the installation. I'd already set up the SSO before I started recording, as that would have taken too long (and would be too boring to watch!). (Top tip: the screencast is at 800×600, so hit F11 to get fullscreen mode in your browser. Also, it's a 3 Meg file, so please be patient while it comes down the pipe!)
Of course, the usual caveats apply – it's a beta, SAP's service portal pages are horribly complex and any change may break the script, and your own mileage may vary, blah blah blah. Also, the script sometimes matches 5 or 6 digit numbers that aren't OSS notes. But it works for me. It was especially useful this week as I was installing a CRM 4.0 system.
This script is free and open source software; use it as you see fit, and if you're not happy, you can get your money back :-)
// OssNoteFix
// version 0.1 BETA!
// 2005-05-18
// Copyright (c) 2005, DJ Adams
// OssNoteFix
//
// ==UserScript==
// @name OssNoteFix
// @namespace http://www.pipetree.com/qmacro
// @description Make OSS note pages more useable
// @include https://*.sap-ag.de/*
// ==/UserScript==
//
// --------------------------------------------------------------------
//
var textnodes, node, s, newNode, fnote;
// This is the URL to invoke an OSS note. Ugly, eh?
var linkurl = "<a href='https://service.sap.com/~form/handler?_APP=01100107900000000342&_EVENT=REDIR&_NNUM=$1'>$1</a>";
// Right now, an OSS note number is 5 or 6 consecutive digits,
// between two word boundaries. Should be good enough for now.
var ossmatch = /\b(\d{5,6})\b/g;
// Act upon the 'main' framed document which has a form 'FNOTE'
// and the title 'SAP Note'.
if ((fnote = document.FNOTE) && document.title.match('SAP Note')) {
// Get stuffed, evil frames!
if (top.document.location != document.location) {
top.document.location = document.location;
}
// Make a useful document title from the OSS note number,
// found in the FNOTE form's _NNUM input field, and the
// OSS note title (which is in the first H1 element).
var h1 = document.getElementsByTagName('h1')[0];
var heading = h1.firstChild.data;
heading = heading.replace(/^\s*(.+?)\s*$/, "$1");
document.title = fnote._NNUM.value + " - " + heading;
// Make the plain text references to OSS notes into a href links
// pointing to their home in http://service.sap.com
textnodes = document.evaluate(
"//http://text()",
document,
null,
XPathResult.UNORDERED_NODE_SNAPSHOT_TYPE,
null);
for (var i = 0; i < textnodes.snapshotLength; i++) {
node = textnodes.snapshotItem(i);
s = node.data;
// Got a match? Make it into a link
if (s.match(ossmatch)) {
newNode = document.createElement('div');
newNode.innerHTML = s.replace(ossmatch, linkurl);
node.parentNode.replaceChild(newNode, node)
}
}
}
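The heart of the script – spotting note numbers and turning them into links – can be tried outside the browser. Here's a standalone sketch using the same regex and link template as the script above, with the DOM traversal omitted so it runs anywhere:

```javascript
// Standalone demo of the note-linkifying logic from OssNoteFix.
// The regex and link template mirror the ones in the script above.
var linkurl = "<a href='https://service.sap.com/~form/handler?" +
              "_APP=01100107900000000342&_EVENT=REDIR&_NNUM=$1'>$1</a>";
var ossmatch = /\b(\d{5,6})\b/g;

function linkifyNotes(text) {
  // Replace every 5- or 6-digit run with an anchor pointing at the note
  return text.replace(ossmatch, linkurl);
}

console.log(linkifyNotes("See note 19466 for details"));
```

As the caveat above says, any 5- or 6-digit number will match, notes or not – good enough for everyday use.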
To provide balance for all the good press it's getting, I thought I'd share my experiences. And they're all bad, unfortunately.
Downloaded the latest release (Hoary Hedgehog) as a live CD image. Burnt it to CD. Booted my Dell Latitude X200 laptop with it. No video (apart from the initial 80×25 setup screens). Even trying VGA=771 had limited but ultimately intermittent success. On the few occasions when I did actually get a Gnome screen, it would only work in 640×480. (Knoppix and other live CDs work fine on this laptop.)
No bother, I thought, I'll go for an install CD this time, and try it out on my trusty but recently retired old server in the basement. It's as bog standard as you could get, and has had all manner of Linux distributions running fine on it (RedHat, Fedora, Slackware, and so on).
Four attempts at an install later, and no success in sight. Don't ask me what the problems were, because I was so annoyed I erased them from my memory as I chucked the newly burnt CD in the bin. I was planning to put Ubuntu, with the nice Gnome interface, on to that old server, and use it to replace my mum's ageing W98 machine. But no joy.
A shame. I really wanted Ubuntu to work. I'm sure a hell of a lot of work went into putting Ubuntu together. And I'm sure it must be me that's doing something wrong. But I've installed / booted my fair share of OSes in the past (it's a long-term hobby with me) so I wonder what it could be. I'm after the ease of Knoppix, with the slickness and completeness of Gnome. Hmm, perhaps I should have another look at Gnoppix, but it seems that they're based on Ubuntu (now?) as well.
But despite this interest, and despite the great efforts of those behind the scenes (thanks Karl and Mark) to get the behemoth to provide blanket Internet access for the conference location, there was something, well, lacking. It's something that some people have noticed and talked about before now, something that's particularly European (or rather, non-U.S.). Whereas at U.S.-based events (for example, O'Reilly's OSCON, or more pertinently, the SDN Meets Labs in Palo Alto) there's a parallel conversation, a parallel conference going on in the ether, via IM, IRC, and weblogging, there's a noticeable silence at some events in Europe. For example, at the Palo Alto event, there was active participation from people not actually there (with a lot of help from the webcasts), and plenty of conversation in the #sdnmeetslabs backchannel. In Walldorf this week, you could see the digital tumbleweed roll by in there – partly due to the fact that the Internet connectivity didn't extend to the actual session rooms.
I think it's partly the environment (Internet availability at SAP events has been poor to non-existent in the past), but it's also culture. Matthew has talked about this before – conferences are two-way, not one-way. In other words, events are read/write, not read-only. The culture in Europe needs to change. Change from within the corporate mind, and from within the minds of event attendees. I think it is changing. And the more companies realise the benefit of two-way interaction at technical events, the quicker the change will happen.
Roll on Euro-OSCON!
I'm going to have to think of ways to make use of the extra bandwidth and picture storage capabilities, aren't I?
This comes in handy right now, as I was talking to Mark yesterday at the SDN Meets Labs and he was trying to figure out how to get a whole load of pictures, identified by one or more tags, into a pool, all at once. It seemed from the UI that you couldn't do it directly; looking at the API, however, a little script making use of a combination of flickr.photos.search and flickr.groups.pools.add might do the trick.
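Sketched out, such a script would chain those two API calls. Everything below is illustrative only: the method names are the real Flickr API methods mentioned above, but the endpoint, API_KEY and the various ids are placeholders, and the request signing that write calls like pools.add require is left out.

```javascript
// Sketch (not a real script): build the two Flickr REST calls that a
// tag-to-pool copier would chain. API_KEY / USER_ID / PHOTO_ID /
// GROUP_ID are placeholders; the signing needed for writes is omitted.
var ENDPOINT = "https://api.flickr.com/services/rest/";

function flickrCall(method, params) {
  // Assemble a REST-style Flickr API URL for the given method
  var parts = [];
  for (var key in params) {
    parts.push(key + "=" + encodeURIComponent(params[key]));
  }
  return ENDPOINT + "?method=" + method + "&" + parts.join("&");
}

// 1. find the photos carrying a given tag ...
var searchUrl = flickrCall("flickr.photos.search",
  { api_key: "API_KEY", user_id: "USER_ID", tags: "sdnmeetslabs" });
// 2. ... then, for each photo id returned, add it to the pool
var addUrl = flickrCall("flickr.groups.pools.add",
  { api_key: "API_KEY", photo_id: "PHOTO_ID", group_id: "GROUP_ID" });
```

The real work is just a loop: issue the search, walk the returned photo ids, and fire a pools.add for each.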
The event is just getting underway, and thankfully, this time, there's access to the interweb available to one and all.
Right now Klaus Kreplin, an SDN bigwig, is talking about ROI and NetWeaver, and the reality of IT. It's a similar presentation, using some of the same slides, to the one that Shai Agassi gave at TechEd last year in Munich. So I don't have to concentrate too much right now.
But a quick glance at the sessions coming up tells me I'll have to get my brain in gear. Actually, another glance around the auditorium just now and it's full to overflowing. Excellent. Lots of chat and geek-exchanges to come, I hope.
Anyway, I've added some photos of the opening day (pictures from registration) to the SAP Developer Network group at Flickr.
By the way, I'm sure that one of the chaps sitting over to my right is Frank Koehntopp – Frank, is that you? Perhaps I'll ask him via the IRC backchannel for this event – which is the #sdnmeetslabs channel on irc.freenode.net. See you there!
Originally uploaded by qmacro.
They were on their way up from London to the Shakespeare County Raceway for a few races and time trials. It was great to see cars like this, and even better to hear them start up as they left. I have a fondness for old ('60s and '70s) American cars, having had a lovely '73 Chevrolet Caprice for many years.
That reminds me, mallum has a great looking Mustang and I'm really jealous :-)
He goes on:
Now suppose the implementation of great_circle_distance was a web service. It could be a straight-forward REST web service or it could be some sort of RPC or it could be something else. As a programmer writing WITW, I don't care! What has to happen is, I declare the function and then I use it. A little boilerplate is OK, but making me understand URIs or GET or POST or XML isn't. [Emphasis mine]
Allow me to paraphrase, taking the world of SQL as an example: "…but making me understand which tables are which, and the difference between SELECT and UPDATE … isn't".
Surely that's going a little too far in the other direction?
Later:
Sam Ruby, linking to this post, ponders object relational mapping frameworks and simplicity, while reminiscing on concise cylinder and head placement of data in the days of yore.
(I too remember calculating optimum cylinder and track positions on separate spindles for the data in the (then) DL/1-based SAP R/2 databases I worked with, while scoffing at the new kid on the block, DB2, with its newfangled "relational" model, which was obviously not going to last…)
Anyway, his question and statement
What is simplicity? We all think we know what it is.
succinctly puts a finger on one of the real reasons for a lot of the debates that wax and wane as technologies and ideas come and go.
I'd been meaning to get around to making that easier when I saw Erik Benson point at FlickReplacr, a cool bookmarklet toy. I had a look at the Javascript inside, and on seeing this bit,
var g=window.getSelection();
I realised that it was exactly what I could use to make my postcode lookups smoother.
So herewith a little bookmarklet: [Postcode](javascript:location.href='http://uk.multimap.com/map/browse.cgi?pc='+encodeURIComponent(window.getSelection())). As with other bookmarklets, just drag this link to your toolbar; then whenever you see a postcode on a page, select it with the mouse, and click the bookmarklet.
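Unpacked from its one-line javascript: URL form, the bookmarklet amounts to the following (the function wrapper is just for readability; in the browser, window.getSelection() supplies the text and the result is assigned to location.href):

```javascript
// The postcode bookmarklet, unpacked. In the real bookmarklet this is a
// single javascript: URL; here the selected text is passed in as a
// string so the URL-building can be seen (and run) outside a browser.
function postcodeUrl(selection) {
  // Multimap's browse page takes the postcode in the 'pc' parameter
  return "http://uk.multimap.com/map/browse.cgi?pc=" +
         encodeURIComponent(String(selection));
}

console.log(postcodeUrl("SW1A 1AA"));
// → http://uk.multimap.com/map/browse.cgi?pc=SW1A%201AA
```

encodeURIComponent takes care of the space in the middle of a UK postcode, which is why a plain string concatenation isn't quite enough.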
So if you're subscribing via the old RSS feed URLs (//qmacro.org/about/xml or http://www.pipetree.com/~dj/qmacro.rss10) please change to /index.rdf.
Thank you. And sorry for any inconvenience and confusion.
I had a nice conversation with Ralph Meijer this afternoon; he had grabbed a very old program that I'd written – sjabber, a console-based Jabber groupchat client – because he'd been having some issues with his current client.
As Ralph explained in his blog just now, it only took a single-line modification to get it up and running with the newer mu-conference protocol. And if you look at the line:
Type => 'headline', # why did I do this?
it was fairly questionable, even to me, from the start ;-) (The line was commented out.) Clearly it should have been "groupchat" from the beginning. Early days…
BTW, Ralph's speaking this weekend at FOSDEM, in Brussels. Irritatingly I'm otherwise engaged on the Saturday and can't make it then; but I'm hoping to be able to pop down in the car on the Sunday and spend the day there. It's a great event.
- Simplicity: julie should be easy to use when you're drunk.
How … refreshing.
As an MT novice, I looked around and considered how I might best bring those Blosxom-based posts into MT. I mused upon a script revolving around MT::Entry, then one that used XML-RPC, before discovering the import feature from reading the documentation to mtsend.py. A simple file format to mass-load into MT. Perfect.
So I hacked up MakeMtImportEntries.py, a script that takes a filename (of a Blosxom .txt entry) and produces the blog post in the import format consumable by MT.
This is the sort of format I'm talking about:
TITLE: The Blog Post Title
ALLOW COMMENTS: 0
ALLOW PINGS: 1
CONVERT BREAKS: 0
DATE: 04/27/2002 08:57:14
-----
BODY:
The blog post body ...
You can therefore use this script in a find loop, like this:
find ./blog/ -name '*.txt' -exec ./MakeMtImportEntries.py {} \; > import.dat
and then move import.dat to where the MT import function expects it. Pretty straightforward.
(I'm not interested right now in re-creating the categories in MT from the hierarchical categories in Blosxom, so hacking the script to include category-specific "headers" in the template is left as an exercise for the reader.)
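For a feel of what the conversion amounts to, here's a sketch – in JavaScript rather than the Python of MakeMtImportEntries.py, and with the Blosxom file-reading left out – that emits one entry in the import format shown above. As I understand the MT import format, five dashes separate the sections of an entry and eight dashes terminate it:

```javascript
// Sketch only: emit one Movable Type import entry for a post object.
// (The author's real script reads Blosxom .txt files; this just shows
// the shape of the output.)
function mtImportEntry(post) {
  return [
    "TITLE: " + post.title,
    "ALLOW COMMENTS: 0",
    "ALLOW PINGS: 1",
    "CONVERT BREAKS: 0",
    "DATE: " + post.date,   // MM/DD/YYYY HH:MM:SS
    "-----",                // five dashes separate sections
    "BODY:",
    post.body,
    "--------"              // eight dashes end the entry
  ].join("\n");
}

console.log(mtImportEntry({
  title: "The Blog Post Title",
  date: "04/27/2002 08:57:14",
  body: "The blog post body ..."
}));
```

Concatenate one such entry per post into import.dat and MT's import function does the rest.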
Share and enjoy.
A couple of weeks ago, Piers and I noticed some odd system messages on gnu.pipetree.com's console. It looked like we might be under attack. Following some oddness all round, including the network interface not coming up after a reboot, we decided that the best thing to do was a fresh install of everything. So after a quick dash out at lunchtime to buy a new HDD and install the OS and applications, we went down to see the friendly and helpful folks at Mailbox and performed surgery on the patient.
Now back up and running, we're slowly reconfiguring stuff. A move to using Movable Type as a shrinkwrapped piece of commodity software seemed like a good idea. I like tinkering as much as the next person, but I think we've reached a level where certain applications should "just work" – after all, I don't worry how Apache works inside (much), so why should I spend time hacking on blogging software? I should just write. Moreover, fewer Internet cafes offer anything more than web access, so I need a web-based front end to things more and more (I'm still holding on to Mutt for now, though).
Anyway, Robert had recently started using MT, so I thought I'd give it a go. I tried WordPress too. WordPress was a breeze to install and get going with – 10 minutes all told. But it didn't seem to have "proper" RSS (1.0) [Later: I found Morten talking about how to turn RSS 1.0 on], and I couldn't immediately use it with some other software and applications I had in mind. MT was more of a pain. I just about managed it in an hour, but that involved looking at Perl code, tailing server logs, and lots of head scratching. Not very impressive. But now it's running, I'm happy.
Since I had my small Linux server with me (that I used in my session this morning to demo some ICF stuff), we decided to blast away the previous NW4 install and have a mini installfest!
Excellent. There's something about interesting things happening on computer screens that seems to attract the inner geek in people … within minutes we had a small crowd of people joining Piers, Mark, Gregor and me to watch the poor little server get hammered as the RPMs were installed from the DVD.
I'm typing this post while the install goes on – here you can see a screenshot of the progress.
I noticed straight away that the install.sh script supplied on the DVD crashed and burned immediately. I had a little look (open source rules again :-) and found it was because the script was trying to execute a KDE program, "kdialog", to display the licence and prompt for acceptance of the terms. (KDE is a desktop manager.) I don't have KDE installed on the server, so it was almost a non-starter.
Luckily I had a flash of inspiration, and created a symbolic link from a non-existent "kdialog" to the ever-present X client utility "xmessage" (I avoided copy-and-editing the script from the DVD, as I would have had to change loads of relative pathnames and so on to get it to work from a new location). I reinvoked the install.sh script … and everything started perfectly. Hurrah! (If you look closely at the screenshot you can see evidence of this little hack.)
Anyway, 4 RPMs have been installed by now – it's time for me to go back and have a look.
Ok, after less than two hours, my new NW4 system, service release one, is installed and up and running:
Nice work, LinuxLab folks!
There's a group photo pool on Flickr for SAP TechEd this year; I've just uploaded a load of photos (of questionable quality – sorry, I only have my cameraphone). I'm sure you'll see how hard we're all working here. Ahem.
IF_HTTP_EXTENSION, which is what every ICF handler must implement (in the form of a single method HANDLE_REQUEST), has a couple of attributes, FLOW_RC and LIFETIME_RC. FLOW_RC is for controlling the flow of handler dispatching for a request. LIFETIME_RC is for controlling the lifetime of handlers for a sequence of requests. To quote the documentation at help.sap.com on the latter:
HTTP request handlers can control the lifetime of their instances if they are operating in stateful mode … If the attribute IF_HTTP_EXTENSION~LIFETIME_RC is set to one of the following values, the HTTP request handler can specify whether the handler should be reinitiated for every request in a session, or whether the handler should be retained and reused for subsequent HTTP requests.
The default action is for the handler instance created to handle the request to be kept, so that instance-level data is retained (think of an incrementing counter value that keeps going up every new request). This is the equivalent of setting LIFETIME_RC to the value of the constant CO_LIFETIME_KEEP. But if LIFETIME_RC is set to the value of the constant CO_LIFETIME_DESTROY:
The current instance of the HTTP request handler is terminated after the request is processed. If stateful mode is active, a new instance of the HTTP request handler is created. This means that local data belonging to the instance is lost.
(This of course only makes sense in the context of stateful sessions, which you can create using the SET_SESSION_STATEFUL method (of IF_HTTP_SERVER) – one effect of which causes a context id cookie to be constructed and set in the next HTTP response.)
Ok, so with the phrasing of the help text (such as "…can control the lifetime…") and the implication of the "DESTROY" part of the constant name, I did a little experiment to try and control the lifetime, by setting the LIFETIME_RC attribute so that the handler instance would be destroyed after it exited. Did it work as expected?
No.
Hmm. What's going on? Well, it seems that with LIFETIME_RC
, it's either all or nothing. If you set your session to be stateful and specify that the handler instance should be kept (or let it default to that anyway), then you can't, later in the session, suddenly decide to have the handler instance destroyed.
Looking under the hood, we see this confirmed in the ICF layer's code. The whole process of handling a request is triggered via PBO modules in SAPMHTTP
, and via the HTTP_DISPATCH_REQUEST
coordinator, we come to the EXECUTE_REQUEST
(or EXECUTE_REQUEST_FROM_MEMORY
which I've seen in 6.40) method of the CL_HTTP_SERVER
class.
When a request comes in, the appropriate handler is instantiated, and the HANDLE_REQUEST
method called. Once this method returns, a decision based on LIFETIME_RC
is made as to whether to save the instantiated handler object in an internal table, ready for a new request. Unless LIFETIME_RC
is set to destroy, the object is saved, providing we're dealing with a stateful session:
if server->stateful = 1 and
   extension->lifetime_rc = if_http_extension=>co_lifetime_keep and
   ext_inst_idx = -1.
* add extension to list of instantiated extensions
  ...
There's no facility for removing existing table entries though. And this is the key to understanding why manipulating the LIFETIME_RC
attribute won't always do … what you think it should do.
I bet you're glad you know that now … share and enjoy :-)
]]>After registration, we went along to Shai Agassi's keynote presentation. It was fairly interesting, but overall, there was a single key point that stayed with me: "unification" is the new "integration".
Shai talked about cycles in the IT industry. He used the airline check-in process as an example of how processes are invented, integrated, and eventually commoditised. He pointed out that, in the past, check-in used to be handled by people. Big queues, long delays. Now we have self-service check-in stations, where you just stick in your credit or frequent flyer card and are checked in in an instant. Big attraction. The next big thing will be airlines offering you a check-in process … performed by a real human being! A circle completed.
Last week I read a blog entry talking about XML and the transport of binary data. Someone mentioned to me that XML was fairly inappropriate, inefficient even, for transporting data that is more suited to a binary representation, and that perhaps binary protocols are the future. Now if that isn't a complete circle being formed I don't know what is :-)
And this is where we come to "unified, not integrated" (my phrase). Recently I pondered the potential irony of SAP's technology directions, with particular reference to data integration. Basically it seems to me that SAP is moving away from integration as a focus (I used the word "de-integration" to describe what I meant), with all the different parts of the NetWeaver family performing different functions, and data living in and travelling between different systems. (This is in stark contrast to the opposite effect on the client side, where all data and functions seem to be converging into one homogenised front-end).
Anyway, this morning during the keynote, with the irony of integration still in my thoughts, I settled on an explanation of what might be happening. And the key to what is happening is the word "unified". Unification of data and processes is close to integration of data and processes, but it's not the same thing. And (unless I got the wrong end of the stick) it seems that platform and data unification is what SAP is driving at right now. So I'm now trying to change the design of the puzzle – where I try to figure out what direction SAP is going with technology – from an "integration"-based one to a "unified"-based one.
And cycles? Well, I'm just wondering how long it will be before we complete the circle and data and function integration and consolidation is all the rage. Again.
Of course, there was a lot of other stuff that went on at the keynote too. Here are a couple of pointers:
Shai gave a lot of time to telling us about how composite apps and xApps will help us be more flexible in business. I'm not doubting this, but I personally am still struggling to understand what they are (technically) and how they tick. I went to the xApps booth at TechEd last year in Basel, and quizzed the patient folk there, trying to understand what we are dealing with. But I failed to "get it". I suspect, based on what other people have said to me on this subject, that I'm not alone. So that's perhaps why Shai gave the subject so much airtime this year. We'll see; I'm definitely going to re-visit the xApps booth this year and have another go :-)
Shai invited Harald Kuck up on stage to give a fantastic demonstration of how SAP hackers in Walldorf have enveloped the Java VM with the same virtual machine / process management goo that we've grown to know and love in the ABAP world (it works so well there that we don't even notice it working). This is what SAP excels at – having the inspiration and guts to go for really hard problems … and solve them. Hats off to those people (just a shame the language in question is Java :-)
I am lucky enough to attend a number of technical conferences each year. SAP TechEd is certainly the most well-attended, orientated around the biggest software entity in the world, and I don't need to tell you how important the 'net is to ERP business these days.
So you'd think that providing some sort of Internet access would be as natural and obvious as providing food and water. Wouldn't you? Well, wrong. I'm having deja-vu all over again, as the saying goes. In Basel, no 'net access, and the sessions were so full you were refused entry. Pretty disappointing. I decided to give TechEd another chance this year. Perhaps it's too early to say for sure, but I think that it was possibly a bad move. No 'net access at all (apart from access that you can buy on an hourly basis from the convention centre itself … at extortionate prices), except in the speaker room (and it's not proper 'net access – just access to the Web via a proxy, so I can't reach my email on my box, via ssh, for example). And the sessions we've wanted to attend so far … yep, have been too full to get into. So a bit like Basel. But with even less power (for laptops). "Disappointing" is the word that comes to mind. I attend the grass-roots event FOSDEM (the Free and Open-Source Developers' Meeting) in Brussels, and even they can organise free wifi access. And the attendance fee is … zero! What's going on, SAP?
(Indeed, as you can see here, Craig Cmehil is so desperate he's had to resort to paper and pencil to write his blog post!)
I'd like to end this ramble on a positive note, though. Our great leader Mark Finnern is running around organising a few of these extortionate access cards for the SDN clubhouse (which is also wireless-less and powerless) plus some power outlets for us. Nice one Mark, and thanks! We'll see how it goes.
Update: Mark has organised power for the SDN clubhouse – thanks Mark!
]]>This year, it's going to be different. There's a wiki, there will be Birds Of a Feather sessions, including a SapAndOpenSourceBof run by me and my good friend Piers. Wifi and 'net access has even been promised too. (Although when I compare the bullet points on the Munich and San Diego pages, there's a distinct difference – no wireless at Munich?)
But the biggest change this year for me is that I'll be speaking. I'm giving a one hour session:
The Internet Communication Framework: Into Context and Into Action!
Business Server Pages (BSP) technology is a great way to put together ABAP powered web-based applications. But that's not the only way; in the grander scheme of things, BSP technology is "just" a layer that sits on top of the Internet Communication Framework (ICF), the Web Application Server's core foundation that provides a full set of object-orientated APIs for handling HTTP requests and responses. This talk will put the ICF not only into context – what it is, how it works, why it's important – but also into action, with a live demonstration where we build, debug and run a simple web-based service. If you're interested in looking under the hood at the engine that connects the Internet Communication Manager with the ABAP Personality world, and learning how to use it yourself, then this talk is for you!
I'm really excited at the chance to ramble and rant about some great parts of the Web Application Server; in many ways, the ICF is a bridge between the traditional walled world of SAP and the world of open standards. And this particular bridge is constructed with blocks that have "HTTP" stamped through them.
]]>From integration to de-integration
But while attending a (rather poor, I have to say) training course at SAP UK last week, it finally struck me. SAP have been selling enterprise level software for a very long time. And one of the key selling points was that the data, and the business processes, were integrated. Indeed, for a good while, SAP's slogan (at least here in the UK) was "Integrated Software. Worldwide".
But funnily enough, that slogan disappeared, in favour of another that didn't focus on integration. I can't remember what it was (it was certainly less memorable), and now it's changed again (to "The best-run businesses run SAP"). Anyway, back to integration. The dream presented by SAP in the 80s and 90s showed companies that they could escape the headaches of separate systems and integrate their data and processes into one single system (R/2 and later R/3). This was indeed the reality too.
But what's happening today? Every way you turn, there are SAP systems doing different things, managing different processes, and storing different data (sometimes sharing it with other SAP systems). Customer Relationship Management (CRM) systems handling sales-related activities; Supplier Relationship Management (SRM) systems handling supplier activities; there's the IPC system for pricing and configuration, and the APO system for planning and optimisation activities. And data is moved to a business warehouse for reporting purposes. And so it goes on.
Data and process de-integration, anyone?
I don't know whether the term should be "deintegration" or "disintegration"; all I know is that it seems a different road that SAP is travelling down than before. On the course I attended last week, the reality of managing data between different SAP systems in one installation was rather worrying. Just as it was 20 years ago. Or so it seems. And right now, at least with CRM and BI, there doesn't seem to be a uniform set of data exchange tools for managing the exchange – for example, while Bdocs are used to manage master data between a CRM system and an R/3 ("legacy" :-) system, they're not used for the same purpose between a CRM system and a BI system. Perhaps I haven't drunk enough kool-aid yet.
A different rule for the client side
So what was the purpose of this post? It wasn't directly to point out the about-turn SAP seem to be making in this area. It was actually to point out the juxtaposition that SAP's new de-integrated direction has with … their vision for front-ends. While de-integration is where it's at on the server side, we have total integration on the client side. Enterprise Portal (EP) 6.0, WebDynpro, PeopleCentric design (don't get me started on that) – every function that a user might need is lumped together in one homogenised "web" client. Email, discussion groups, graphics, reports, transactions, IM, and so on. All on one page in your browser. What happened to "best of breed" on the desktop? I'm a great believer in the right tools for each job. That's why I run a proper email client (for email and threaded forum-style discussions), a separate IM client, a separate newsreader, and a browser. Each one excels in its own domain. Trying to achieve everything in a browser window is doomed.
So if best of breed, focused application platforms are what SAP is aiming for at the server end, why go in the other direction at the client end? A single screen looking extremely busy with lots of little application windows, flashing lights, tables, graphics, and so on, is great for screenshots and brochures. But what about the real end user? I'm an end user as well as a developer, and can imagine productivity taking a huge dive if we were forced to use this.
Of course, the browser-based applications served from the EP are a lot different from what I imagine browser-based applications to be. You know, ones that allow you to use your browser as, well, a browser, with old fashioned things like bookmarking, navigation, proper page titles, and so on. And ones that work in browsers, not just in a specific combination of Microsoft Windows and Internet Explorer – I'm having a nigh-on impossible time getting into the SAP Developer Network site right now, because of recent changes that cause the site not to "work" at my end with Firefox and / or Epiphany on Linux.
But that ("browser abuse", as also noted in more general terms by Joe Gregorio) is a story for another time ;-)
]]>The writing is simple. Straightforward. It reflects the exact, black and white reasoning of this autistic child. Sad and funny at the same time. And as I read each sentence, I feel that a lot of work has gone into every one of them. Exactly the right words, the right number, and the right punctuation. It's almost as if the words on the page, at a level above the story, tell a story themselves. I think the choice of font, which annoyingly is not mentioned in the impressum at the front like fonts used to be ("Printed in some-such-font by some-company in Bungay, Suffolk"), but is a very clean and light sans-serif one, adds to the clarity and directness of thought.
When you eat a bar of chocolate, you eat it chunk by chunk and there's not much to think about. When you eat a truffle, or some delicate hand-made chocolate assortment, you eat it slowly, bit by bit, enjoying the flavours and appreciating the work that's gone into making it. But sometimes you just shove it in your gob and it's gone. I'm trying desperately not to do the latter with this book. It's too good for that.
]]>It's a great mix of ideas, skills, and energy, where the talks are decided more or less spontaneously and written up on a series of whiteboards.
I gave a talk – HelloSapWorld – this morning which was intended to burst the bubble that SAP seems to find itself within, for a great majority of hackers outside the SAP universe.
Monolith, behemoth, huge-and-complicated, impenetrable, impossible. Those are all terms I've heard used by friends and colleagues with respect to getting started with SAP. Especially here. So I was pleasantly surprised to see a good turnout for the talk, where we crowded round the laptops (there was no projector in the room allocated to us, so we improvised by replicating the screen on the rest of the wifi-connected laptops in the room via VNC).
After a few slides, we got into the meat of the talk, which was a live hacking session where we created simple "Hello World" style objects – a report, a function module, a BSP page, an ICF handler, a Python RFC client, a Perl RFC server, and even a (one-dynpro) transaction. The time we had (an hour) simply flew by.
I have already had very positive feedback from the attendees … who knows, maybe we'll see more open source hackers entering the SAP world soon!
Update: Some pictures are available here: https://www.flickr.com/groups/eurofoo/
]]>The meeting kicked off at around 2pm in the "posh" 6th floor of SAP's EVZ building (I understand food and drink focused logistics were the reason for that – nicely organised, Mark!) and lasted until sometime between 5pm and 6pm. I'm not sure exactly when as the time flew, and in any case, the coffee was so strong it made me go cross-eyed and I couldn't have read the time if I'd tried.
We started with a huge round of introductions, where each person suggested one good thing and one bad thing about SDN. This was very revealing, as it showed clearly that different people have different perspectives on what SDN is and their relationship to it. But there was a lot of common ground.
As far as the good things went, well, the fact that SDN exists was pretty much up there at the top of the pile. Everyone was in agreement that a site like SDN, with weblogging, forum discussion and download facilities, as well as a growing collection of articles, was an extremely good thing (obviously!).
There were plenty of bad things that people put forward too. None that can't be solved, I might add. I think it's fair to say that the overwhelming winner here was the fact that you have to register and sign in to get to the SDN content and use the facilities. This (as I and others have pointed out in the past) has caused SDN to exist as an island. Very few people outside of SDN link to SDN content (forum posts, weblog items, articles) from their own pages, simply because their readers are not prepared to go through the hassle of registering and authenticating with what they see as a "walled city". And the number of people who might discover and link to SDN content is lower than it should be for exactly the same reasons.
But – get this – the requirement to log on is going away in the near future. Hurrah!
Following the introductions, I inflicted a combination of ranting, rambling and arm waving on the room, in the form of a short talk on an outsider's view of SDN. I won't repeat the content of the talk to you here, but as I'd put together a few slides (mostly to fool people into thinking I knew what I was doing) you can read them now here: An outsider's view of SDN.
There was a good range of topics discussed. Here are some of the highlights (for me).
What SDN is, and consequently what content it can and should contain, was enthusiastically debated. I think it's fair to say that there were two general camps. In camp 1, there were people who regarded SDN as an extremely useful channel to deliver information on technology direct to developers. In camp 2, there were people who regarded SDN as an open community where everyone and their opinion were equal.
Weblogs and forums imply (to me) an open opportunity to talk about things, learning with and from your developer peers. This, coupled with the fact that a channel to deliver information seems (again, to me) to suggest traffic in mostly one direction and some sort of hierarchy in the relationship, puts me clearly in camp 2.
Everyone agreed that SDN was still in its infancy, and finding the right balance in this respect was (and is) an ongoing task, which is understandable in a "living, breathing" environment.
Not running MS-Windows, let alone the dreaded Internet Explorer, puts me in the minority. A position I make up for by being vocal about web design and architecture that doesn't work well in non-IE situations. Javascript, frames, impossibly long URLs, and other usual suspects were mentioned in the discussion. Fortunately I wasn't alone with my usability woes. I guess with any big site there are learning steps; I'm just doing my bit to help by complaining (politely :-).
The fact that SDN remains largely a black box (or is that a black hole?) in the general web universe has largely to do with the authentication requirements I've already mentioned. As soon as those requirements go away, SDN can partake of the link love that other communities are blessed with. Moreover, mechanisms like trackback will allow people who don't want to use SDN itself to write about something, and nevertheless make the connection to SDN content in a useful and reciprocal way.
Raised mainly by the SAP people who submit articles and weblog entries to SDN, the consensus was that better facilities for managing content would be a bonus. The ability to revise content after submission is a good example of what people were asking for.
There's a new mechanism that Mark and the rest of the SDN team have been working on, with which contributors to SDN can earn points, that can be redeemed for … well, I can't remember, to be perfectly honest. It was about that time I made the mistake of drinking more black coffee, which made my head spin and my eyes cross. But I do remember there was a lot of discussion about how the points could or should be awarded.
Kathy Meyers gave a good talk on how to write well for the web (I hope she's not reading this now with that in mind – I'm sure I've broken lots of rules already!). On the subject of producing content, we touched on the question of when content should be in the form of an article, and when it should be in the form of a weblog. Basically, I think the (sensible) consensus was reached that it didn't really matter that much, and one just used common sense to tell. Different people will have different perspectives, and that's fine.
Oh yes, and before I forget – some of the discussion was recorded, to be shown to the rest of the SDN team, who due to geography and other real world restrictions couldn't be there. So don't think that the meeting was an isolated affair; hopefully, all the points raised and discussed will find their way to the people who can act upon them.
After the meeting, Lutz, Mark, Matthias and I went into Wiesloch to the Alter Schlachthof for a few beers and something to eat. We had a great time talking about all sorts of things. It was all fine until I gave the language game away by talking to the waitress, as a result of which Matthias forced us all to speak in German :-)
Later Mark tracked down Marc, who was in Heidelberg, and who, after finishing his drink there, came down to meet us. He arrived with a plastic bag with (SAP) "TABU" on the outside and Absinthe on the inside. He ordered a blue drink, pointed out that it was actually green, and then drank it anyway, telling us stories involving VCs, a hotel called "W", nightclubs in New York, and conferences in Hawaii. I think he was from outer space. But a great guy.
Anyway, that just about wraps it up. I need to get off this train and onto another one. It was indeed an honour to meet everyone yesterday – thanks!
]]>There's lots of great discussion here. But I've got to go, as the discussion is reaching a stage where I simply have to interrupt!
]]>The winning talks look really good – I'm looking forward to hearing them. It's interesting that two of the three are BW related. Seems like a hot topic.
I wonder if TechEd will offer some birds of a feather (BOF) style facilities? If there are such facilities, perhaps some of the rest of us can get together informally during TechEd and inflict our talks on each other anyway :-)
]]>As an example of taking the RESTian approach to exposing your SAP data and functionality through services you can build with the excellent Internet Communication Framework (ICF) layer, I thought I'd show you how straightforward and natural data integration can be by using a spreadsheet as an example.
In my recent SDN article (published this week):
"Real Web Services with REST and ICF" (unfortunately lost and not archived)
… I presented a simple ICF handler example that allowed you to directly address various elements of CTS data (I prototyped it in my NW4 system, so I thought I'd use data at hand and build an example that you could try out too). For instance, you could retrieve the username of the person responsible for a transport by addressing precisely that data element like this:
http://shrdlu.local.net:8000/qmacro/transport/NW4K900007/as4text
The approach of making your SAP data and functionality first class web entities, by giving each element its own URL, has wide and far reaching benefits.
Take a programmable spreadsheet, for example. You're managing transports between systems by recording activity in a spreadsheet. You're mostly handling actual transport numbers, but also have to log onto SAP to pull out information about those transports. You think: "Hmmm, wouldn't it be useful if I could just specify the address of transport XYZ's user in this cell here, and then have the value appear automatically?"
Let's look at how this is done. My spreadsheet program of choice is the popular Gnumeric, available on Linux. If you use another brand, no problem – there are bound to be enough similarities for you to follow along with what comes next. For background reading on extending Gnumeric with Python, you should take a look here.
With Gnumeric, you can extend the functions available by writing little methods in Python. It's pretty straightforward. In my home directory, I have a subdirectory structure
.gnumeric/1.2.1-bonobo/plugins/myfuncs/
where I keep the Python files that hold my personal extended methods.
In there, in a file called my-funcs.py, I have a little script that defines a method func_get()
. This method takes a URL as an argument, and goes to fetch the value of what that URL represents. In other words, it performs an HTTP GET to retrieve the content. If successful, and if the value is appropriate (it's just an example here; I'm expecting a text/plain result), then it's returned … and the cell containing the call to that function is populated with the value.
Here's the code.
# The libs needed for this example
import Gnumeric
import string
import urllib
from re import sub

# My version of FancyURLopener to provide basic auth info
class MyURLopener(urllib.FancyURLopener):
    def prompt_user_passwd(self, *args):
        return ('developer', 'developer')

# The actual extended function definition: GET the URL, return
# the text/plain body (minus trailing newline), or an error value
def func_get(url):
    urllib._urlopener = MyURLopener()
    connection = urllib.urlopen(url)
    data = connection.read()
    if connection.info().gettype() == 'text/plain':
        return sub("\n$", "", data)
    else:
        return "#VALUE!"

# The link between the extended function name and the method name
example_functions = {
    'py_get': func_get
}
It's pretty straightforward. Let's just focus on the main part, func_get()
. Because the resource in this example is protected with basic authentication (i.e. you have to supply a username and password), we subclass the standard FancyURLopener to be able to supply the username and password tuple, and then assign an instance of that class to the urllib._urlopener
variable before actually making the call to GET.
If we get some "text/plain" content as a result, we brush it off and return it to be populated into the cell; otherwise we return a "warning – something went wrong" value.
We add the method definition to a hash that Gnumeric reads, and through that assignment, func_get()
is made available as a new custom function py_get
in the spreadsheet. (There's also an extra XML file called plugin.xml, not shown here but described in the Gnumeric programming documentation mentioned earlier, that contains the name of the function so that it can be found when the spreadsheet user browses the list of functions.)
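If you're trying this today: the script above is Python 2 and Gnumeric-specific, but the GET-with-basic-auth core translates easily. Here's a hedged Python 3 sketch of the same func_get() logic, standing alone outside Gnumeric (the host, path and 'developer' credentials are just the example values from this post, not anything you'd use as-is):

```python
import re
import urllib.request

def trim_newline(data):
    """Strip a single trailing newline, as func_get does before returning."""
    return re.sub("\n$", "", data)

def func_get(url, username="developer", password="developer"):
    """GET a URL with basic auth; return text/plain content, else '#VALUE!'."""
    password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    password_mgr.add_password(None, url, username, password)
    opener = urllib.request.build_opener(
        urllib.request.HTTPBasicAuthHandler(password_mgr))
    with opener.open(url) as connection:
        data = connection.read().decode("utf-8", errors="replace")
        if connection.headers.get_content_type() == "text/plain":
            return trim_newline(data)
    return "#VALUE!"

# e.g. func_get('http://shrdlu.local.net:8000/qmacro/transport/NW4K900007/as4text')
```

The shape is the same: authenticate, GET, check the content type, trim the trailing newline, and hand back either the value or an error marker for the cell.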
So, what does that give us? It gives us the ability to type something like this into a spreadsheet cell (split for readability):
=py_get('http://shrdlu.local.net:8000/qmacro/transport/NW4K900011/as4user')
and have the cell automagically populated with the appropriate data from SAP. You can see an example of this in action in the screenshot:
As you can see, being able to address information as first class web resources opens up a universe of possibilities for the use of real web services.
As a final note, I've submitted a SAP TechEd talk proposal. It's titled:
"The Internet Communication Framework: Into Context and Into Action!"
If you're interested in learning more about the ICF, and want to have some fun building and debugging a simple web service with me, you know where to cast your vote if you haven't already. Hurry though – there's only a few hours to go!
Thanks!
]]>But it wasn't happening from my NW4 system. So I rolled up my sleeves, and wielded the mighty "/h" in the ok-code (in the R/2 days we used to call this "hobble mode" :-), cleaving my way into the ABAP that lay beneath OSS1. What I found was quite interesting.
There's a command-line program called lgtst
that can be used to query the message server of an SAP system and have information on logon groups and so on returned. This lgtst
program is not, apparently, supported on all operating systems, so there's a condition in the ABAP that checks that.
If the server's operating system is not supported, then a simple logon string is constructed from the technical settings held in OSS1 (menu path Parameter -> Technical settings). For example, if you specify a SAProuter at your site thus:
Name: host01
IP Address: 192.168.0.66
Instance: 99
with SAProuter details thus:
Name: sapserv3
IP Address: 147.204.2.5
Instance: 99
and the SAPnet message server details thus:
Name: oss001
DB Name: O01
Instance: 01
then the route string constructed is just the concatenated SAProuter hops leading to the dispatcher at O01's "01" instance, like this:
/H/192.168.0.66/S/sapdp99/H/147.204.2.5/S/sapdp99/H/oss001/S/sapdp01
This route string is then used in conjunction with a direct local call to your SAPGUI client, so that the end result is that a new SAPGUI instance is started for that connection.
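The concatenation itself is mechanical: each hop contributes a /H/host/S/service segment. Here's a tiny Python sketch of that construction (my own helper function, purely illustrative), using the example addresses above:

```python
def route_string(hops):
    """Build a SAProuter route string from a list of (host, service) hops."""
    return "".join("/H/%s/S/%s" % (host, service) for host, service in hops)

hops = [
    ("192.168.0.66", "sapdp99"),   # your local SAProuter
    ("147.204.2.5", "sapdp99"),    # sapserv3
    ("oss001", "sapdp01"),         # O01's dispatcher at instance 01
]
print(route_string(hops))
```

Each hop folds into the string in order, giving exactly the /H/192.168.0.66/S/sapdp99/H/147.204.2.5/S/sapdp99/H/oss001/S/sapdp01 route shown above.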
So far, so good (or not, depending on your luck with SAProuter routing :-).
Logon group popup, then SAPGUI call
On the other hand, if the server's operating system is supported, then something rather different happens. In this case, the lgtst
program is executed on the server, to discover what logon groups are available for OSS. How does this happen? Well, the SAProuter information we've already seen is used to construct a route string:
/H/192.168.0.66/S/sapdp99/H/147.204.2.5/S/sapdp99 ...
but, instead of pointing to a dispatcher at the SAP OSS end:
... /H/oss001/S/sapdp01
it points to system O01ās message server:
... /H/oss001/S/sapmsO01
Once this route string has been constructed, it's used in a call to lgtst
like this:
lgtst -H /H/.../H/oss001/S/sapmsO01 -S x -W 30000
This is basically requesting that the message server for O01 send back information on available servers (instances) and logon groups. A typical reply looks like this:
list of reachable application servers -------------------------------------
[pwdf1120_O01_01] [pwdf1120] [10.16.0.11] [sapdp01] [3201] [DIA UPD BTC SPO ICM ]
[pwdf1302_O01_01] [pwdf1302] [147.204.100.41] [sapdp01] [3201] [DIA BTC SPO ICM ]
[pwdf0936_O01_01] [pwdf0936] [10.16.0.19] [sapdp01] [3201] [DIA BTC SPO ICM ]
[pwdf0810_O01_01] [pwdf0810] [10.16.0.18] [sapdp01] [3201] [DIA BTC SPO ICM ]
[pwdf1307_O01_01] [pwdf1307] [147.204.100.46] [sapdp01] [3201] [DIA BTC SPO ICM ]
[pwdf1300_O01_01] [pwdf1300] [147.204.100.39] [sapdp01] [3201] [DIA BTC SPO ICM ]
[pwdf1301_O01_01] [pwdf1301] [147.204.100.40] [sapdp01] [3201] [DIA BTC SPO ICM ]
[pwdf1177_O01_01] [pwdf1177] [10.16.1.13] [sapdp01] [3201] [DIA UPD BTC SPO ICM ]
[pwdf0937_O01_01] [pwdf0937] [10.16.0.20] [sapdp01] [3201] [DIA BTC SPO ICM ]
[pwdf0809_O01_01] [pwdf0809] [10.16.0.17] [sapdp01] [3201] [DIA BTC SPO ICM ]
[pwdf0808_O01_01] [pwdf0808] [10.16.0.16] [sapdp01] [3201] [DIA UPD BTC SPO ICM ]
[pwdf0807_O01_01] [pwdf0807] [10.16.0.15] [sapdp01] [3201] [DIA BTC SPO ICM ]
[pwdf0392_O01_01] [pwdf0392] [10.16.0.10] [sapdp01] [3201] [DIA BTC SPO ICM ]
[o01main_O01_01] [pwdf1070] [147.204.100.35] [sapdp01] [3201] [DIA UPD ENQ BTC SPO UP2 ICM ]
list of selectable logon groups with favorites ------------------------------------------------
[1_PUBLIC] [147.204.100.40] [3201] [620]
[2_JAPANESE] [147.204.100.40] [3201] [620]
[DO_NOT_USE] [147.204.100.35] [3201] [620]
[EWA] [147.204.100.40] [3201] [620]
[REPL] [10.16.1.13] [3201] [620]
[SPACE] [10.16.1.13] [3201] [620]
What we're interested in are the lines in the second half of the output: the list of selectable logon groups. The key data items here are the group names themselves (e.g. 1_PUBLIC), the IP addresses (e.g. 147.204.100.40), and the port numbers (e.g. 3201). The ABAP behind transaction OSS1 receives this lgtst
output and parses it out into a nice list of groups, which it then presents to the user as shown in the screenshot above.
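The parsing step is straightforward too; here's a rough Python sketch of pulling the group name, IP address and port out of those bracketed fields (the function name and exact field layout are my assumptions; the real OSS1 ABAP is doubtless more careful):

```python
import re

def parse_logon_groups(lgtst_output):
    """Extract (group, ip, port) triples from the 'selectable logon
    groups' section of lgtst output. A sketch only: field positions
    are assumed from the sample output shown above."""
    groups = []
    in_groups = False
    for line in lgtst_output.splitlines():
        if "selectable log" in line:
            in_groups = True      # everything after this marker is a group line
            continue
        if in_groups:
            fields = re.findall(r"\[([^\]]*)\]", line)
            if len(fields) >= 3:
                name, ip, port = fields[0], fields[1], fields[2]
                groups.append((name, ip, int(port)))
    return groups

sample = """list of selectable logon groups with favorites
[1_PUBLIC] [147.204.100.40] [3201] [620]
[SPACE] [10.16.1.13] [3201] [620]"""
print(parse_logon_groups(sample))
# [('1_PUBLIC', '147.204.100.40', 3201), ('SPACE', '10.16.1.13', 3201)]
```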
(And it goes almost without saying that if the call to lgtst
fails, we get that friendly message "Unable to connect to message server (default connection will be used)" and revert to the direct SAPGUI call.)
So that's where this popup comes from. OK. Now I understand. It's amazing how you can use a transaction for years and never really look into how it actually works.
So, just to get back to why I came here in the first place: why doesn't this popup appear in NW4? NW4 is a Linux-based testdrive system. lgtst
works fine. But look at this:
*---- Folgende Betriebssysteme werden unterstützt
*---- (i.e. "the following operating systems are supported")
IF ( SY-OPSYS = 'HP-UX' ) OR ( SY-OPSYS = 'AIX' )
OR ( SY-OPSYS = 'OSF1' ) OR ( SY-OPSYS = 'SINIX' )
OR ( SY-OPSYS = 'SunOS' ) OR ( SY-OPSYS = 'Windows NT' )
OR ( SY-OPSYS = 'Relia' ) OR ( SY-OPSYS = 'SP_DC' )
OR ( SY-OPSYS = 'OS/400' ).
No Linux? Hmm, I soon fixed that, by copying the transaction (OSS1 -> ZSS1) and the ABAP behind OSS1 (RSEFA910), adding a line to this IF statement to bring a bit of love to the operating system of choice of a "Gnu generation" :-)
Now I can call ZSS1 and delight in the group logon popup. Hurrah!
]]>Ivo Totev gave a keynote today: SAP goes J2EE, and there are three other sessions from SAP people:
I only wish I could have been there.
Nice one, folks!
]]>I made a beeline for the main SAP area in Hall 4, only to be told by someone on the Web AS stand that they'd not heard anything about 640 being available for Linux. Aaargh!
Not to fear, though: I found out that SAP had a separate stand in the Linux Park over in Hall 6. I legged it over there, to meet Fabrizio from the Linux Lab. And there they were in all their glory: DVDs containing WAS 640, MaxDB 7.5 … and SAP NetWeaver Developer Studio!
Fabrizio and his colleagues had been busy preparing the packages for CeBIT, and he gave me a quick demo on the laptop. Nice work, Linux Lab!! What's even more special, though, is that this 640 will work on SuSE 8.1, Red Hat 9.0, and Fedora Core 1. (There may have been another distribution, but I can't remember.) This is great news for those of us who can't afford to shell out hundreds of euros for some sort of "advanced server" edition of a Linux distribution. And Fabrizio has put together RPMs to make the install a breeze. Fantastic!
It was great to see the Developer Studio running on Linux; and it was just as surprising to see how it had been done … using Wine, the Windows API implementation for *nix. The reason for requiring Wine is that there are a couple of controls in SAP's Eclipse plugins that invoke an OCX in the background. This means that in certain situations (when developing a Web Dynpro, for example), the plugin on a native Eclipse installation just won't work. The Linux Lab chaps are planning to make this port native; it's just a matter of tuits.
SOAP sucks!
All in all, a very worthwhile visit. I met up with Piers soon after (on the left in this picture)
and we tramped round the halls until our feet were sore and our heads were full. During that time, we found Benny (we'd been looking for him). It was great to meet him; we chatted for a while on aspects of J2EE, JNI, and Perl integration, and had a great REST vs SOAP "debate" … I'm sure Benny has now seen the light ;-). Here's a slightly blurry picture of me and Benny.
The nominal caption is "SOAP sucks!" :-)
]]>Tim's talk was very interesting, especially coming at this stage in Open Source's lifetime (early on). He mentioned afterwards that he's given that talk a few times now, and people are starting to catch on to what he's saying. What is he saying? Well, it's nothing particularly radical, nor is it anything that's not been discussed before by Tim or others. But what was great about the talk is the way it put all the pieces together, and provided the audience with a view above the parapet.
Tim talked about how the focus is, or should be, moving away from software as a product, and further towards being a commodity. Pieces of software become merely components in undertakings that are larger than the code itself. Citing the usual suspects (Amazon, Google, eBay), he pointed out that what was important today was:
"Interoperability and open data formats may be more important than source code availability"
It's something I've said many times before: what's perhaps even more important than open source is open protocols.
(More pictures from FOSDEM 2004 here.)
]]>I'll be wearing my Programming Jabber t-shirt at FOSDEM tomorrow, so if you're there too and spot it, come by and say hello.
On another note, I've used the power of Galeon's "Smart Bookmarks" to build myself a nice little interface to OSS notes. (Galeon is my Gnome browser of choice, based on Mozilla's rendering engine.)
As you can see from the screenshot, I can get directly to an OSS note by entering the number into a box on my toolbar. Behind this is a URL (split for readability):
http://service.sap.com/~form/handler
?_APP=01100107900000000342
&_EVENT=REDIR
&_NNUM=
to which the entered OSS note number is appended. This URL is the one used in the JavaScript displayNote() function behind the OSS note quick access form on the main notes page at SAP.
Simple but effective! You might consider building something like this into your browser too.
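The same trick works outside the browser, too; here's a hypothetical Python helper that builds the note URL the way the smart bookmark does (the base URL is the one shown above; the function name is mine):

```python
from urllib.parse import quote

# The URL behind the smart bookmark, as shown above (joined back together).
BASE = ("http://service.sap.com/~form/handler"
        "?_APP=01100107900000000342&_EVENT=REDIR&_NNUM=")

def oss_note_url(number):
    """Build the direct-access URL for an OSS note: the note number
    is simply appended to the _NNUM= parameter."""
    return BASE + quote(str(number))

print(oss_note_url(12345))
# http://service.sap.com/~form/handler?_APP=01100107900000000342&_EVENT=REDIR&_NNUM=12345
```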
]]>Nice one, Venus!
]]>FOSDEM is a great grass-roots event that is full of friendly hackers. Add a wonderful city to the mix, and what more do you want? I attended FOSDEM a couple of years ago, when I was invited as a speaker; my talk was on "Understanding Jabber Components", which went down well.
I remember noticing that other speakers in earlier talks were having difficulties with the huge size of the projector displays (they couldn't reach to point to anything in the top halves of the slides), so just before the start of my talk I rushed outside and grabbed a fallen branch to use as a pointer. It turned out to be a great ice breaker, and rather useful too. (Ralph Meijer, who's done some cool things with Jabber, took the pictures. Thanks, Ralph.)
In the run-up to that conference in 2002, I also introduced O'Reilly UK to the FOSDEM organisers, with a view to sponsorship. It seems to have turned out well.
And I've just noticed that Dave Cross, a fellow London Perl Monger, is speaking at FOSDEM this time around. (OK, I'm unfortunately only an occasional monger these days, because I'm never in the right place at the right time.) Nice one, Dave!
]]>I've been baking bread for quite a while now. The bread baking bug first got hold of me when visiting friends. They had a bread making machine: something that you put all the basic ingredients into, hit a button, and presto. Naturally we bought one soon after, and I was, well, "hooked".
But sometime last year I decided there was too much plastic and electronics between me and the bread, and started making it by hand. Just the ingredients, a bowl, time, Radio 4, and me. What a difference. It's become my number one way to relax, especially after a session at the keyboard. I love making bread. All sorts. And I also love not going to the baker's to buy bread. It's a nano-step closer to self-sufficiency. And a very rewarding one.
So this Christmas I progressed backwards even more. I received a fantastic present: a grain mill, Hawo's Queen 1 model. Beautifully simple and rock solid. The millstones are corundum, 10cm in diameter.
As well as giving you the total health benefit that only freshly milled wheat (and other grain) can, milling your own on a loaf-by-loaf basis is fun, and gives me my daily fix for the simpler things in life. I guess the next step is to grow my own. I'm just not sure I have the space!
]]>Actually, as you can see from a few recent posts here, what online time I've had has been taken up with SDN, the SAP Developer Network, a new venture from SAP and others to build an online community along the lines of MSDN or O'ReillyNet. Forums, developer areas, articles, that sort of thing. It's certainly a great step in the right direction, but IMO still has some way to go from the usability point of view. You know the sort of thing: use of frames, unwieldy URLs, web-based forums that are difficult to navigate efficiently (as most web-based forums are; a mailing list or NNTP gateway wouldn't go amiss here), and so on.
It's a particular shame about the forums; there are interesting conversations going on there, but it's so hard to get around the messages ("click", "click", "click", "click", "errr", "click", "damn, now where am I?") that I simply can't be bothered to fight my way to the right posts. I guess I'm just not "modern" enough. I'd already made my concerns known to the powers that be, so at least I have a moral right to go on about it here now ;-)
I've started to automatically pull my SDN weblog posts into the /tech/sap category, as there's a nice RSS feed provided for each weblogger there.
I've also written two or three articles so far, the most recent of which:
Set Your WAS 6.10 System To Work - Transport Tracking with RSS
shows you how to use the evaluation WAS 6.10 system and build a BSP application to provide an RSS feed of your CTS transports, so you can track system developments and customisations in the comfort of your own RSS reader. (If you're fortunate enough not to have to use Internet Explorer, or other browsers on MS-Windows platforms, you'll see that the conversion of the article into HTML has a few problemettes. I'm reliably informed that the formatting problems will be addressed soon.)
Anyway, my battery's low, so I'll stop here. It's frightening really: once I get round to opening a new blog post in the editor, words just splurge out. I don't know whether that's good or bad.
]]>I was there way back when, with colleagues and friends, among them my SAP hacking partner-in-crime Piers, on the historic occasion of SAP's CeBIT announcement of R/3 on Linux. We were even so geeky as to take a picture recording the event, under the watchful eye of Tux the Linux penguin mascot.
Anyway, one big reason to visit Hannover in March is because SAP is intending to make available a new version of the Linux-based evaluation WAS system, at release 6.40, including the all-important ABAP stack. Thanks SAP, especially the Linux Lab folks and also those at SAP who bore the brunt of my recent emails about this; you know who you are :-)
Yippee!
For references, have a look at the comments thread to Visiting SAP NetWeaver Development Nerve Center, specifically this message. Also, Alexander H from the Linux Lab was kind enough to send this reply to an email on the linux.general mailing list.
]]>GET KNA1.
SUMMARY.
WRITE: / ...
GET KNB1.
DETAIL.
WRITE: / ...
EXTRACT ...
When you execute a report that uses a logical database, you're really just hitching a ride on the back of the database program that actually reads through the logical database you've specified; your GET statements are reactive, almost event handlers, that do something when passed a segment (ahem, node) of data by means of the proactive PUT statements in the database program (e.g. SAPDBDDF for the DD-F logical database).
Anyway, this brings me to something that's been floating around in the back of my mind since TechEd last month in Basel. I attended a great session on ABAP Objects, given by Stefan Bresch and Horst Keller (thanks, chaps). In a section championing the explicit nature of ABAP Objects, there was a fascinating example of an implementation of a simple LDB using a class, using ABAP Objects events (RAISE EVENT ... EXPORTING) and event subscriptions to achieve the PUT / GET relationship. Here's that example.
There's the 'ldb' class that implements a simple database read program for the single-node (SPFLI) logical database:
class ldb definition.
public section.
methods read_spfli.
events spfli_ready exporting value(values) type spfli.
private section.
data spfli_wa type spfli.
endclass.
class ldb implementation.
method read_spfli.
select * from spfli
into spfli_wa.
raise event spfli_ready exporting values = spfli_wa.
endselect.
endmethod.
endclass.
Here we have a single public method READ_SPFLI that reads the table SPFLI, raising the event SPFLI_READY for each record it finds. This is like the PUT from our traditional database program.
Then we have a report that uses that logical database. It's also written as a class:
class rep definition.
public section.
methods start.
private section.
data spfli_tab type table of spfli.
methods: get_spfli for event spfli_ready of ldb
importing values,
display_spfli.
endclass.
class rep implementation.
method start.
data ldb type ref to ldb.
create object ldb.
set handler me->get_spfli for ldb.
ldb->read_spfli( ).
display_spfli( ).
endmethod.
method get_spfli.
append values to spfli_tab.
endmethod.
method display_spfli.
data alv_list type ref to cl_gui_alv_grid.
create object alv_list
exporting i_parent = cl_gui_container=>screen0.
alv_list->set_table_for_first_display(
exporting i_structure_name = 'SPFLI'
changing it_outtab = spfli_tab ).
call screen 100.
endmethod.
endclass.
In the START method we are effectively declaring the use of the logical database by instantiating an 'ldb' object, the equivalent of specifying a logical database in a report program's attributes section. Then we define the method GET_SPFLI as the handler for the events (SPFLI_READY) that will be raised during the database's reading. This of course is the equivalent of a GET SPFLI statement. To initiate the reading of the database we invoke the READ_SPFLI method. Finally, there's a DISPLAY_SPFLI method in the 'rep' class that uses ALV to present the data on the screen.
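For anyone without an ABAP system to hand, the PUT / GET event relationship can be sketched in Python with a plain callback subscription. This is just an analogue of the two classes above (all names are mine, and the list of records stands in for the SELECT loop):

```python
class Ldb:
    """Analogue of the 'ldb' class: reads records and raises an
    event (calls its subscribers) for each one."""
    def __init__(self, records):
        self._records = records       # stands in for SELECT * FROM spfli
        self._handlers = []

    def set_handler(self, handler):   # like SET HANDLER ... FOR ldb
        self._handlers.append(handler)

    def read_spfli(self):
        for rec in self._records:     # the SELECT ... ENDSELECT loop
            for h in self._handlers:  # RAISE EVENT spfli_ready EXPORTING ...
                h(rec)

class Rep:
    """Analogue of the 'rep' class: subscribes to the event and
    collects the records it is handed."""
    def __init__(self):
        self.spfli_tab = []

    def start(self, ldb):
        ldb.set_handler(self.get_spfli)
        ldb.read_spfli()

    def get_spfli(self, values):      # the GET SPFLI equivalent
        self.spfli_tab.append(values)

rep = Rep()
rep.start(Ldb([{"carrid": "LH", "connid": "0400"},
               {"carrid": "AA", "connid": "0017"}]))
print(rep.spfli_tab)
# [{'carrid': 'LH', 'connid': '0400'}, {'carrid': 'AA', 'connid': '0017'}]
```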
I don't know about you, but I was taken aback by the beauty of this. As we're approaching the weekend, a time to unwind and reflect, I just thought I'd share it with you.
]]>Unfortunately, on going to the website for the first time to activate my account, I received proof that Powergen has its head in the sand (or somewhere else) when it comes to customer service.
Yes, the old classic "your browser is not supported by this website". Aaargh. Perpetrated by many companies, including my mortgage company The Woolwich, who annoyingly refuse to acknowledge me as an online customer because of my choice of browser (with the result that I can't avail myself of some of the services I need online), this issue shouldn't exist in this millennium.
Of course, my browser is a nice up-to-date Galeon (an extremely normal browser for the Gnome environment) and not "Netscape 1.3.5" as Powergen seems to think. To Powergen's slight credit, they do at least acknowledge non-MS platforms and browsers. But it's still extremely frustrating. Good grief.
Powergen joins others in the hall of shame in this respect.
]]>decode_url
which shows you what the gunk in the rewritten (mangled) URL actually is. Unfortunately, my free trial WAS system is at release 6.10 and doesn't contain decode_url
.
"Shame", I thought, first of all. Then: "Great!". A perfect excuse to have a rummage around in the BSP's guts. I was curious as to how this particular thing worked, and spent a pleasant hour or so in my favourite tool, the trusty ABAP debugger (kudos to the debugger team at SAP, time and time again!). My aim was to write my own version of decode_url
.
I found a clue in CL_BSP_RUNTIME
; I knew it had to be somewhere in the BSP classes. I noticed that ON_CHECK_REWRITE
called the suspiciously named CL_HTTP_UTILITY=>FIELDS_TO_STRING
. Following the trail, I eventually landed on CL_HTTP_UTILITY=>STRING_TO_FIELDS
(well, it had to exist, didn't it ;-).
After that it was all downhill.
I created a very simple BSP page decode_url.htm
which does the job. Not as pretty as the BSP team's original decode_url
I'm sure, but hey, it's only for me.
This is what it looks like in action:
Thanks to Brian, I took a small stroll through some of the BSP's guts, and learnt stuff on the way. I've always said the best way to broaden your R/3 and Basis skills is to spend an hour debugging an area that interests you, and this time was no exception. So get out your tools and off you go!
]]>As an attendee of other technical events, for example O'Reilly's Open Source Convention, I've become used to expecting wireless 'net access. One of the great things about attending conferences and meeting like-minded people is that there's a ton of social interaction and collaboration that goes on in parallel to the actual sessions, presentations and stalls.
With that in mind, and presuming that there will be 'net access, I've set up an IRC channel "#teched" on gnu.pipetree.com (port 6667). The IRC server is password protected; specify "teched" as the password when you connect. There you can log on and discuss aspects of SAP technology as presented and discussed during the sessions and presentations. The collaboration is facilitated by that killer app of IRC, the Daily Chump bot. It sits in an IRC channel and helps you collate links and comments into a dynamic weblog. For more information, see the Chump's documentation (at the link above). An example of the Chump at work can be seen at the RDF Interest Group's collective blog.
So if you're interested in finding a place to chat about things and log things of interest with comments, you're welcome to the #teched channel and the use of the Chump bot.
Any questions, just ask!
]]>I wrote about it after seeing Nat and Miguel (de Icaza) demonstrate it at their keynote at OSCON this year, in this post: Dashboard, a compelling articulation for realtime contextual information.
I even hacked together a Dashboard backend that populated the dashboard with thumbnail pictures of books (from Amazon) when ISBNs were mentioned in conversations. It was my first C# project too; fun :-)
Update 08 Sep 2018: This post came up on my "on this day" radar today, and it's interesting to reflect on how this has progressed. Dashboard itself is no more, but the ideas were solid, and in the SAP ecosphere we now have SAP CoPilot, which takes many of the ideas of Dashboard and combines them with conversational UI and more.
]]>Making bread is a pastime (is something more than a pastime when it goes on every day?) that gets easier, more fun, and more interesting the more you do it. Making preserves is a current hit too.
Yesterday I pulled potatoes, carrots, parsnips, Pastinaken, and onions from the ground and roasted them with some delicious Cumberland sausages from Bury market.
Today I spent a wonderful couple of hours up a tree picking plums. I think it was a sort of meditation. No radio, no walkman, no wifi, no bluetooth, no technology whatsoever, save for a ladder. The more intense things get at work and online, technologically speaking, the more refuge I find in nature and simple ways. It puts things into perspective.
]]>Those were the days.
Anyway, many years later, we still have OSS notes. Higher note numbers, to be sure. But has the general OSS notes experience improved? Not that much. While we now also have a web interface (via service.sap.com) in addition to the R/3 system-based access to OSS, that web interface could do with some love.
Wouldn't it be nice to be able to refer to an OSS note, and the note's sub-sections, via first class URLs? Then I could say, in some HTML (in a wiki, or in a weblog entry, or wherever) "refer to this note" and put an HTTP link direct to the note, rather than tell the user how to go through the rigmarole of searching for it and navigating the forest of JavaScript, new windows, and frames to get to what they're looking for. How about something like:
http://service.sap.com/oss/notes/12345
That would be great for starters! For authorisation, how about simple but effective basic HTTP authentication? If you're going to use the web (HTTP), embrace it; don't program around it.
And while we're at it, how about offering RSS feeds of notes by component? That way, it would be straightforward for people to keep up with OSS info using tried and tested technology, and open tools that are out there right now.
For many SAP hackers like me, OSS is still a very important source of info. Small improvements like this would make our lives a lot more pleasant.
[The concept of a "first class URL" is of course from the RESTian (REpresentational State Transfer) view of the web. For more info, see the REST Wiki.]
]]>As it's just down the road from me, I might go. Then again, how much of it is going to be yet more marketing of the NetWeaver flavour? You can't tell these days. There's an interesting couple of things on the agenda:
13:30 Projekterfahrungen zum SAP Web AS (project experiences with the SAP Web AS)
* Ein Bericht des SAP Consulting (a report from SAP Consulting)
and
14:30 Live-Präsentation (live presentation)
* Web-Entwicklung mit dem SAP Web AS unter Java und ABAP (web development with the SAP Web AS in Java and ABAP)
so perhaps I'll go just for the afternoon.
Now, if only I can persuade my wife that it's worth the EUR 150 attendance fee they're asking. Hmm, if I just go for the afternoon and miss the lunch, perhaps it would be less :-)
]]>One of the sections in the talk was on producing RSS from R/3. RSS? Isn't that for weblogs? Sure, but it's a general syndication and metadata format that lends itself to many purposes. In the company where I work, we've been producing RSS from R/3 for years: SD business data (sales orders, product proposals, material info).
When you look at RSS from 10,000 feet, it's pretty obvious why it lends itself so well to SAP data; the core document model is the same as the core document model in R/2 and R/3, namely a header and a number of positions, each of which can be embellished with domain-specific and compartmentalised data. And more recently, other people have been catching on to using RSS for business data. When you think about it, it's a no-brainer. The most interesting news, just this week, is that Amazon is now offering RSS feeds for all sorts of business data. The penny is dropping, finally.
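To make that header-and-positions mapping concrete, here's a rough Python sketch that renders a sales order as a minimal RSS 2.0 channel, one item per position. All the field names and the URL here are made up for illustration; this is not our actual feed format:

```python
import xml.etree.ElementTree as ET

def order_to_rss(order):
    """Render a sales order dict as minimal RSS 2.0: the order header
    becomes the channel, each position becomes an item."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = f"Sales order {order['number']}"
    ET.SubElement(channel, "link").text = order["link"]
    for pos in order["positions"]:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = f"{pos['material']} x {pos['qty']}"
    return ET.tostring(rss, encoding="unicode")

xml = order_to_rss({
    "number": "4711",
    "link": "http://example.com/orders/4711",   # hypothetical link
    "positions": [{"material": "WIDGET-01", "qty": 10},
                  {"material": "WIDGET-02", "qty": 5}],
})
print(xml)
```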
Here are a couple of recent articles on RSS and extensibility:
]]>It was a very interesting time. Rather than mainly authoring, most of my work was editing, restructuring, and adding some new content. In the past, I have denied the existence of a somewhat strenuous attention to prose detail, but I guess I finally have to admit that it's there. I really enjoyed the challenge, although it was hard work using a combination of Open Office's word processing program and MS Word. Give me DocBook and a proper editor any day (I wrote Programming Jabber that way).
]]>It was lovely to meet old friends and make new acquaintances. Amongst others, I met some of the Jabber guys (pic), plus Matthew, Steve, Gnat (and family), Paul (pic), Randy, Edd, Dave, Rael, Christian, Geoff, Tom, Joe, Leon, Ask, James, and plenty of other Perl and O'Reilly folk. I even managed to say a brief "hi" to Nat.
Even when we weren't having fun, we were having fun. The author signing event was great; Piers and I were drinking beer to celebrate the end of our talk (which had just finished) when we were snapped.
Our talk had included live demos against an SAP R/3 system, which I was running on the diminutive Sony Vaio laptop (128Mb RAM, PII-233, 12Gb HDD) that you can see in the picture. While preparing the system the day before in the speaker room, the work processes decided for some reason to recompile all the ABAP components, which almost killed the laptop. The HDD went mad for minutes on end, and made funny noises, which Graham promptly likened to the sound of a deep fat fryer in action. I'm thinking of renaming the laptop "chip-pan.local.net".
]]>Stonehenge is hosting a post-OSCON free beer and games afternoon/evening event here in Portland; the place is packed, and everywhere I hear the sounds "oooh, I remember this" or "aah, I used to be good at this game" from people rediscovering Galaga, Donkey Kong, Centipede and many other classic computer games from the 1980s.
I discovered Perl, and subsequently the power of Open Source, through Randal. Way back when, I discovered Randal's magazine columns on Perl. I regularly printed a column out, and took it to lunch with me to study. Back at the office, I used to enthuse about what I'd just learnt about "this new language" to my work colleagues (including Piers). I got to know Perl well, and haven't looked back.
Thanks, Randal.
]]>I discovered Dashboard this week thanks to Edd, who has been doing some neato hacking with some Dashboard frontends and backends already. Dashboard shows itself as a little GUI window in which information sensitive to what you're currently doing (receiving an IM message, sending an email, looking at a webpage, for example) is shown.
The heart of Dashboard is a matching and sorting engine that receives information (in the form of "cluepackets"; how evocative is that?) from frontend applications (like your IM and email clients) and asks the plugged-in backends to find stuff relevant to that information, which is then displayed in the sidebar-style window, designed to be glanced at rather than pored over. It's a lovely open architecture in that you can (build and) plug in whatever frontend or backend lumps of code you think of.
I've been musing about an SAP backend; wouldn't it be interesting if the engine could get a match from R/3 on a purchase order number, for example? Of course, there's nothing out of the box on the R/3 side that could be used, but as our talk at OSCON (hopefully) showed, there are plenty of opportunities for the wily hacker.
And what about Jabber? While glueing Jabber stuff onto the front end is one thing, building a pubsub-style Jabber backend could get really interesting: coordinated matching, CRM-style features … Ooo, the world definitely could get very lobster-like.
And I know it annoys Nat, but I just had to point out that the GraphViz output for matching clues looks very arc-and-nodey … and we all know what that leads to :-)
]]>Beer may be involved, too!
]]>Does it have to be this way? We have airports and security in Europe too, you know. But what we also have is a sense of politeness and courtesy and the willingness to treat people like, well, people.
Of course, it goes without saying that the sweetness comes from the excellent time I know I'm going to have this week with everyone at OSCON. I'm sitting here right now in the hotel lobby and I know a week-long brane-melting experience awaits me!
]]>But secondly, and more importantly, whatever happened to knowledge and discourse for its own sake? From studying RDF, for example, even at the fairly superficial level that I have, I've exercised my mind thinking about hard questions of language, expression, relationships, identity and semantics. While the concept of a Semantic Web platform is simple (a vast homogeneous database spanning the world of information), its nuts and bolts, the substructure of concrete, steel and ontological rivets, are submerged under a sea of meaning, nuance and interpretation.
"Anyone can say anything about anything"
Thinking about this stuff is rewarding. Did you go to college and learn only about stuff that directly related to the job you do now? No, I didn't either. I may have written one or two Latin comments in my code in the past, but that's as far as it goes :-)
I'm grateful to all those people (the REST, #rdfig and #foaf people, plus people at the W3C and elsewhere) for being ever helpful, friendly, and enthusiastic in sharing their knowledge of such interesting topics.
]]><foaf:Person> <foaf:mbox>mailto:dj.adams@pobox.com</foaf:mbox> ... </foaf:Person>
Actually, while I think on, why not:
<foaf:Person> <foaf:mbox rdf:resource="mailto:dj.adams@pobox.com" /> ... </foaf:Person>
Anyway. The idea is that rather than refer to a person directly, we refer to them indirectly: "the person with the email address dj.adams@pobox.com". Why do this? Well, for one thing, an email address is a fairly unambiguous property; there's usually the same person consistently to be found behind an email address. The FOAF spec uses DAML to annotate the mbox property as being unambiguous (you can see this in the RDF version of the spec).
In the arcs and nodes world of RDF, it would look something like this:
+----------+
|          |
+----------+
      |
      | mbox
      V
+---------------------------+
| mailto:dj.adams@pobox.com |
+---------------------------+
The box at the top represents the person, and is a blank node, in that it doesn't have a (direct) identifier. The uniqueness is indirect.
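The practical payoff of an unambiguous property like mbox is "smushing": two anonymous descriptions that share an mbox value can be merged into one description of the same person. A toy sketch in plain Python (dicts rather than RDF triples; the weblog URL is just for illustration):

```python
def smush(descriptions):
    """Merge person descriptions that share the unambiguous 'mbox'
    property. Each description is a dict; 'mbox' acts as the
    indirect identifier for the blank node."""
    merged = {}
    for desc in descriptions:
        key = desc["mbox"]
        merged.setdefault(key, {}).update(desc)
    return merged

people = smush([
    {"mbox": "mailto:dj.adams@pobox.com", "name": "DJ Adams"},
    {"mbox": "mailto:dj.adams@pobox.com", "weblog": "http://www.pipetree.com/qmacro"},
])
print(people["mailto:dj.adams@pobox.com"])
# {'mbox': 'mailto:dj.adams@pobox.com', 'name': 'DJ Adams', 'weblog': 'http://www.pipetree.com/qmacro'}
```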
Then I came across Mark Baker's FOAF file, which starts:
<foaf:Person rdf:about="http://www.markbaker.ca/"> <foaf:name>Mark Baker</foaf:name> ...
What's this? Does this mean that the HTTP URI http://www.markbaker.ca/ represents Mark? (What does "represents Mark" mean anyway?) We know about REST, and representations of resources that can be retrieved via HTTP URIs. If I specify a MIME type of "text/html" when asking for a representation of the resource at that URI, I am sent some HTML (Mark's home page). I wonder what MIME type I'd have to specify to get Mark himself disassembled into IP packets and reassembled next to my laptop? Of course, before you say anything, this is one of the differences between URIs and URLs, and I won't expect to see Mark any time soon :-) Plus, there's the concept of identity, which must stand alone from the concept of resources and representations … if Mark comes down the wire, am I getting a representation of Mark, or Mark in person? At least I might get a picture of him if I specify "image/*".
In any case, Mark does assert that the URI identifies him, the person. Very interesting. Mark pointed me to an item on Norman Walsh's weblog which touches on this subject.
So, what does, or could, an HTTP URI represent? Leigh Dodds recently expressed a desire to detail aspects of his life in RDF (and I like the idea of the Semantic Web's "year zero" that he mentions). Films he's seen, books he's reading, and so on. Great!, I thought, and immediately perused Erik Benson's allconsuming.net API documentation (there's a RESTful way of getting the book data too, now); I could pull out the data from there and construct some RDF statements about the CurrentlyReading book information.
But before I started, I went all philosophical and thought about representations and abstractions for a bit; at least, as much as my limited knowledge would allow. I'd been thinking that the currently-reading information might come out like this:
foaf:Person <foaf:mbox rdf:resource='mailto:dj.adams@pobox.com' /> <books:currentlyReading rdf:resource='http://allconsuming.net/item.cgi?isbn=0596002025' /> ...
But surely that says that I'm currently reading the allconsuming.net page for that book, not that book itself? It's not a question of unique identity, as the ISBN in the URI disambiguates. It's a question of what the URI represents. How do you refer to the book itself – the abstraction (funny how "abstract" actually means "real" here)? Perhaps here, as in FOAF, a level of indirection could be used:
<foaf:Person> <foaf:mbox rdf:resource='mailto:dj.adams@pobox.com' /> <books:currentlyReading> <books:Book> <books:describedAt rdf:resource='http://allconsuming.net/item.cgi?isbn=0596002025' /> ... </books:Book> </books:currentlyReading> ...
In other words, I'm currently reading the book that's described at that allconsuming.net page. Seems fair. And this is what it looks like:
          Person
        +-------+
  +-----|       |-----+
  |     +-------+     |
  | mbox              | currentlyReading
  V                   |
+---------------------------+
| mailto:dj.adams@pobox.com |
+---------------------------+
                      |    Book
                      V +-------+
                        |       |
                        +-------+
                            | describedAt
                            V
+--------------------------------------------------+
| http://allconsuming.net/item.cgi?isbn=0596002025 |
+--------------------------------------------------+
On the subject of identification and HTTP URIs, Tim Berners-Lee wrote a paper "What do HTTP URIs Identify?" where he discusses various angles on the difficulty regarding resources, identification and the real world. The paper refers to, and stems from, discussion on this in the httpRange-14 issue in the TAG issues list.
In the book diagram above, I've included little class annotations for each of the two blank nodes (Person and Book). I wonder if, at least in RDF, classes can be used effectively to draw a distinction between Mark Baker and his home page? In other words, the snippet of Mark's FOAF data:
<foaf:Person rdf:about="http://www.markbaker.ca/"> <foaf:name>Mark Baker</foaf:name> ...
which is really shorthand for:
<rdf:Description rdf:about="http://www.markbaker.ca/"> <rdf:type rdf:resource="http://xmlns.com/foaf/0.1/Person" /> <foaf:name>Mark Baker</foaf:name> ...
says that the resource at http://www.markbaker.ca is a Person.
By the way, Mark is not alone in identifying himself, a person, with an HTTP URI. The RDF Primer does the same thing in an example (this time in N-Triple format):
<http://www.example.org/index.html> <http://purl.org/dc/elements/1.1/creator> <http://www.example.org/staffid/85740> .
This says that the document at http://www.example.org/staffid/85740 created the document at http://www.example.org/index.html.
Or does it?
Integrating SAP R/3 and Open Source & Open Protocols
last thing on Wednesday the 9th to find out about "extending and embracing" R/3 with open source tools and protocols. The more the merrier!
IMO making and maintaining DBMS products isn't one of SAP's core business drivers. Never was, never should be. SAP's strengths lie in a combination of building good application code, application development infrastructures, and abstraction layers for underlying common technologies like databases, spool mechanisms, TP monitors (ok this was more an R/2 thing) and so on, so that their application and technology products run on lots of platform / software combinations. SAP rescued ADABAS D (and renamed it SAP DB). I think that was a good move. They're now sharing the technology and their support with an open source DB vendor with a good name.
As long as the relationship remains open source, what can be seen as bad about the partnership?
A common way to do this is to prefix the plugin filenames with digits, like this:
00pluginY 01pluginX
and so on.
Check the filename and remove any offending characters.
http://www.example/food/italian/
should display any readme or readme.html dropped into
$datadir/food/italian
use CGI qw/:standard/; $url = url(); $path_info = path_info()
You, unfortunately, can't get to the #entry bit since that's never sent to the Web server. That's handled by the browser alone.
$plugin_dir/lib/Text/Tiki.pm
keys %blosxom::plugins
All "on" plug-ins:
grep {$blosxom::plugins{$_} > 0} keys %blosxom::plugins
And if you're interested in the order:
print join ', ', @blosxom::plugins
I think it's going to be a lot of fun.
Wot hapen when nigel molesworth, the curse of st custards, find himself at hoggwarts skool for WITCHCRAFT and wizzardry? Read on!
A must-read. It brings back many happy memories for me and I'm sure tons of other people of my generation. Molesworth is the creation of Geoffrey Willans and Ronald Searle, who wrote the Down With Skool! collection, as any fule kno!
$blosxom::plugins{'smartypants'} = 0;
I decided to expand on this lovely idea, by adding some more functionality to the plugin. With my expanded version of "wikiwordish" (diff here) it is now possible to have InterWiki style links automatically recognised and expanded too. (I also made a modification to the regex in story(), as it wasn't behaving quite right). So I can refer to the, say, StartingPoints page of the MeatBall Wiki by using a link in my weblog entry like this: [[MeatBall:StartingPoints]], which would be turned into a link like this: [[MeatBall:StartingPoints]].
The way it works is simple: you tell the plugin where to find an InterWiki "intermap" file, which contains a list of InterWiki names and URLs. You can probably find this somewhere in your wiki installation. You can also add your own name/URL combinations in the configuration in case you're not allowed to edit the intermap file; in my setup I've added the name "PipeSpace" to refer to my MoinMoin-powered space Wiki (see the "Configurable Variables" section in the code), so I can now create a link such as this: [[PipeSpace:AllConsumingRestIdeas]] which is turned into this: [[PipeSpace:AllConsumingRestIdeas]]. If you don't want an icon to appear next to the link, you can turn that off in the configuration.
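The expansion idea is simple enough to sketch in a few lines. Here's a rough illustration (in Python for brevity, although the plugin itself is Perl; the intermap entries and the function name are illustrative, not taken from the actual code):

```python
import re

# A miniature "intermap": InterWiki name -> URL prefix. In the real plugin
# this would be loaded from the wiki's intermap file, plus any local
# additions (these example entries are made up for illustration).
INTERMAP = {
    "MeatBall": "http://www.usemod.com/cgi-bin/mb.pl?",
    "PipeSpace": "http://www.pipetree.com/space/",
}

def expand_interwiki(text):
    """Turn [[Name:Page]] into an HTML link using the intermap."""
    def repl(match):
        name, page = match.group(1), match.group(2)
        prefix = INTERMAP.get(name)
        if prefix is None:
            return match.group(0)  # unknown InterWiki name: leave untouched
        return '<a href="%s%s">%s:%s</a>' % (prefix, page, name, page)
    return re.sub(r"\[\[(\w+):(\S+?)\]\]", repl, text)

print(expand_interwiki("See [[MeatBall:StartingPoints]] for ideas"))
```

Names that aren't in the intermap are left alone, which is roughly the behaviour you'd want for ordinary double-bracketed text.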
What's more, some standard InterWiki links are not to wikis, but to other popular sites; for example [[IMDB:0088846 Brazil]] gives [[IMDB:0088846 Brazil]], and [[Dictionary:alliteration]] gives [[Dictionary:alliteration]].
Fun! Here's to more weblog/wiki fusion.
The idea is that you can have Blosxom accept submitted entries and treat them as "pending", using a ".txt-" file extension, so they're not immediately viewable in the weblog output. You can then review the entries and publish them by changing the extension to ".txt" (or not, as the case may be).
The mechanism will kick in in one of two modes:
There's also a separate directory where you can add your own formatters; this is the "formatlib" that should be created in the plugin directory itself. I've written a simple formatter that lives in this directory, called "plugin", that accepts a plugin submission (name, category, URL, description, author) and formats it into an entry (body) style similar to those shown in the registry at the moment.
It's a basic bit of code, works for me. I've already got a few mods in mind, such as, perhaps, accepting payloads in other formats such as an RSS item (it could then be parsed and appropriately formatted by an RSS-item-aware formatter). That's for later, though.
A: When it's an RSS feed.
I've pondered the relationship between weblog and RSS before, and in an Old Speckled Hen-induced philosophical state of mind, have decided for experimental purposes that for all URI intents and purposes they are one and the same.
With that in mind, my thoughts turned (naturally) to content negotiation, or "conneg". My weblog, whether HTML or RSS, is my weblog. Same thing, different representation. So perhaps both representations should actually have the same URI, /. Clients could use conneg to specify which representation they wanted, for example:
RSS 0.91:
[dj@cicero dj]$ GET -H"Accept: application/rss+xml" -Use /
GET //qmacro.org/about
Accept: application/rss+xml
200 OK
Content-Type: application/rss+xml
<?xml version="1.0"?>
<!-- name="generator" content="bloxsom" -->
<rss version="0.91">
<channel> <title>DJ's Weblog</title> ...
[dj@cicero dj]$
Or RSS 1.0:
[dj@cicero dj]$ curl -H"Accept:application/rdf+xml" /
<?xml version="1.0"?>
<rdf:RDF xmlns="http://purl.org/rss/1.0/" ... >
<channel rdf:about="//qmacro.org/about">
<title>DJ's Weblog</title> ...
[dj@cicero dj]$
Or even simply HTML:
[dj@cicero dj]$ GET -Use /
200 OK
Content-Type: text/html; charset=ISO-8859-1
<title>DJ's Weblog</title> ...
[dj@cicero dj]$
In other words, specify what representation you want in the Accept header. Here's a quick summary of how (90% of) the Accept: header is used:
As an HTTP client, you say what media types (which roughly translates to "representations" here) you're willing to accept for a given resource (URI). You can specify multiple media types, and with the aid of a sort of ranking mechanism, you can say which media types you prefer over others, if given the choice. You do this by assigning q values, so that "application/rdf+xml, application/rss+xml;q=0.5, */*;q=0.1" means "I'd love application/rdf+xml, but if you haven't got that, then send me application/rss+xml; failing that, anything will do". The values used are between 0 and 1 (in ascending preference); any media type without a value is assumed to have a value of 1.
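That ranking can be captured in a few lines of code. A minimal sketch (in Python purely for illustration; a real implementation would also handle wildcard matching against the server's available types and malformed q values):

```python
def parse_accept(header):
    """Parse an Accept header into (media_type, q) pairs, best first."""
    prefs = []
    for part in header.split(","):
        fields = part.strip().split(";")
        mtype, q = fields[0].strip(), 1.0  # no q parameter means q=1
        for param in fields[1:]:
            name, _, value = param.strip().partition("=")
            if name == "q":
                q = float(value)
        prefs.append((mtype, q))
    # sort by descending q, keeping the header's order among equal values
    prefs.sort(key=lambda p: -p[1])
    return prefs

header = "application/rdf+xml, application/rss+xml;q=0.5, */*;q=0.1"
print(parse_accept(header))
# [('application/rdf+xml', 1.0), ('application/rss+xml', 0.5), ('*/*', 0.1)]
```

The server would then walk this list and send the first representation it knows how to produce.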
So, as a first offering to the Blosxom plugin love-in, I wrote conneg, a plugin with which you can determine the flavour required according to the HTTP Accept header. Here's how it works:
As you can see from the code, the plugin takes into account what content-types you've specified in the "content_type.flavour" files in your blog hierarchy.
Note I said "new plugin event". There are a number of standard plugin hooks in Blosxom (2.0 beta3). For this "flavour" plugin to work, I've added another hook thus:
[dj@cicero blosxom_2_0_beta]$ diff blosxom_2_0_b3.cgi blosxom_2_0_b3.cgi.dj
208a209,211
> # Plugins: Flavour
> map { $_->can('flavour') and $_->flavour() } @plugins;
>
[dj@cicero dj]$
This is in the "Dynamic" section of the code.
I'll run this new plugin hook past Rael shortly. It's a sort of chicken and egg situation – I can't explain the reason for the patch until I've done it and written about it. Rather like conneg and weblogs, perhaps. RSS aggregators might not start doing conneg until weblog RSS content is available by that method, and there's little incentive if no-one's asking for it. So I thought I'd make a move. Experimental, mind you.
I'm a keen user of the Python-based MoinMoin wiki (especially at work, where we manage our internal documentation and work collaboration with it), and the "natural environment" for a wiki-like markup language is … in a Wiki. So I decided to mix up a bit of glue; I stuck Tim's Perl Text::Tiki module into the Python MoinMoin wiki mechanism by writing a very quick and dirty parser, tiki.py. Now I can practice the TikiText markup in my favourite Wiki environment; all I need to do is use a
#format tiki
declaration at the top of a Wiki page to have the glue kick in.
You can see it in action in the demowiki, specifically the TikiTest page. Have a look at the source (with the EditText link) to see the TikiText format.
Fun!
Congratulations to Rael on releasing the plugin-enabled 2.0 Beta1 of Blosxom. I dropped it into my cgi-bin directory, tweaked a few things, and it worked like a dream.
One of the plugins available already is the RSS 1.0 plugin, which I'm now using to generate RSS 1.0 – see the Syndication page for details. This means I can stop using the old XSLT-based mechanism. Another is the Foreshortened plugin, which I'm also using to have a short description generated for the RSS feed.
One thing that strikes me as interesting is the angle in the plugin documentation which encourages plugin developers to respect the Zen of Blosxom and keep its users and platforms (Linux, OS-X and MSWindows) in mind when developing. It's a refreshing and positive call for simplicity.
Following its sibling Blosxom's philosophy of simplicity and reuse of existing tools, Blagg uses "wget" (or "curl") to make the HTTP call. Adding the appropriate option to the string in $get_prog, e.g. by changing from this:
my $get_prog = 'wget --quiet -O -';
to this:
my $get_prog = "wget -U 'blagg/0+4i (wget)' --quiet -O -";
was all that it took.
(In fact, personally I'm using my ETag-aware version of wget, so I made the change in that small script, wget.pl, rather than in Blagg itself.)
This week, Ben had mentioned the Panopticon in reference to the forthcoming ETCON. During last year's, I had hacked around with the Panopticon, creating a sort of Jabber-based information diffusion service to lighten the load on the Panopticon mechanism's single source socket.
With all the talk of lightening the load from RSS consumers, my thoughts turned from these Panopticon experiments to NNTP, as of course it's a technology that is designed for information diffusion, and bearing and sharing load. I couldn't resist a bit of tinkering with NNTP, partly to follow up a little bit myself on RSS to/via NNTP, but mostly in fact to re-acquaint myself with the wonderfully arcane configuration of the majestic beast that is inn. In addition, there's been talk recently of aggregators moving out of the realms of satellite applications and into the browser itself. The Blagg and Blosxom powered Morning Reading page – my personal (but open) news aggregator – is already web-based, so I thought I'd have a look in the other direction.
Aided partly by Jon's Practical Internet Groupware book and partly by the man pages, I put together a simple configuration for a server that I could locally post weblog posts to as articles.
As I saw it, there are two approaches to newsgroup article creation in this context, and each has its pros and cons.
The plugin is called nntp. I modified Blagg slightly so it would pass the nickname to the plugin. My version of Blagg 0+4i is here (it has a number of other modifications too). Feel free to take the plugin and modify it to suit your purpose. It was only a bit of twiddling, but it seems to work.
There are plenty of possibilities for experimentation: combining the various weblog trackbacking mechanisms with NNTP article IDs to link articles together in a thread; replying (to the newsgroup) to an article might send a comment to the post at the source weblog. Hmmmm…
I hacked up a very simple module, WWW::Amazon::Wishlist::XML (keeping to the original namespace in CPAN) which acts as an Apache handler so you can plug your wishlist ID (mine's 3G7VX6N7NMGWM) in and get some basic XML out, in a simple HTTP GET request.
Hereās an example:
http://www.pipetree.com/service/wishlist/uk/3G7VX6N7NMGWM
Note the "uk" part in the path. It signifies that the wishlist is held at amazon.co.uk. If held at amazon.com, specify "com", like this:
http://www.pipetree.com/service/wishlist/com/11SZLJ2XQH8UE
It uses the patched version of WWW::Amazon::Wishlist so should be ok for now with .com-based wishlists too. Of course, it's experimental anyway (as are most of the things I post here) and is likely to explode without warning.
While lamenting the fact that retro-fitting like this is like trying to put a wave into a box, I've made a second patch to the module (the $url regex) so it can successfully find ASINs in U.S. wishlists too.
I wonder when/if we will see consumable wishlist data available directly from Amazon, a la AllConsuming's XML directory?
So I hacked up a few scripts, and here are the results.
Getting my wishlist
Using Simon Wistow's very useful WWW::Amazon::Wishlist, it was a cinch to grab details of the books on my wishlist. (I had to patch the module very slightly because of a problem with the user agent string not being set).
The script I wrote, wishlist, simply outputs a list of ISBN/ASINs and title/author details, like this:
[dj@cicero scraps]$ ./wishlist
0751327824 The Forgotten Arts and Crafts by John Seymour
090498205X With Ammon Wrigley in Saddleworth by Sam Seville
0672322404 Mod_perl Developer's Cookbook by Geoffrey Young, et al
0465024750 Fluid Concepts and Creative Analogies: Computer Models of ...
0765304368 Down and Out in the Magic Kingdom by Cory Doctorow
...
Interacting with AllConsuming
While I'm sure the allconsuming.net site and services are going to morph as services are added and changed, I nevertheless couldn't resist writing a very simple Perl class, Allconsuming::Agent, that allows you to log in (logIn()) and add books to your collection (addToFavouriteBooks(), addToCurrentlyReading()). It's very basic but does the job for now. It tries to play nice by logging you out (logOut()) of the site automatically when you've finished. It can also tell if the site knows about a certain book (knowsBook()) – I think AllConsuming uses amazon.com to look books up, and so the discrepancies between that and www.amazon.co.uk, for example, show themselves as AllConsuming's innocent blankness with certain ISBNs.
Anyway, I'm prepared for the eventuality that things will change at allconsuming.net sooner or later, so this class won't work forever … but it's fine for now.
Adding my wishlisted books
So putting this all together, I wrote a driver script, acadd, which grabs my current reading list data from AllConsuming, and reads in a list of ISBN/ASINs that would be typically produced from a script like wishlist.
Reading through the wishlist book data, acadd does this:
Hereās a snippet of what actually happened when I piped the output of the one script into the other:
[dj@cicero scraps]$ ./wishlist | ./acadd
0751327824 The Forgotten Arts and Crafts by John Se... [UNKNOWN]
090498205X With Ammon Wrigley in Saddleworth by Sam... [UNKNOWN]
0672322404 Mod_perl Developer's Cookbook by Geoffre... [HAVE]
0465024750 Fluid Concepts and Creative Analogies: C... [HAVE]
0765304368 Down and Out in the Magic Kingdom by Cor... [ADDED OK]
...
Woo! Cory's new book, appearing on my Amazon wishlist, was added to my allconsuming.net collection. (In case you're wondering, I am only adding books like this to "Currently Reading", rather than any other collection category, temporarily, as right now only the books in this category along with the "Favourites" category can be retrieved with the SOAP API – and it's upon this API that booktalk relies.)
Anyway, it's late, time for bed, driving to Brussels early tomorrow morning. Mmmm. Belgian beer beckons!
Cut to the present, and Piers and I are thinking about a joint conference presentation. While the presentation format is not in question (HTML), I've been wondering how I might investigate these link rel="…" tags further, learn some more about wikis, and have a bit of fun in the process.
While HTML-based presentations are nice, something that has always jarred (for me) has been the presence of slide navigation links within the presentation display itself. Whether buttons, graphics, or hyperlinks, they invariably (a) get in the way and (b) can move around slightly with layout changes from page to page in the presentation.
I wanted to see if I could solve this problem.
The MoinMoin Wiki (which I use for documenting various things) generates link rel="…" tags for each page, to point to the "Front Page", "Glossary", "Index" and "Help" pages that are standard within that Wiki. The Wiki markup includes processing instructions that start with hash symbols (#), to control things like whether section and subsection headings should be automatically numbered or not, and so on. The name/value style directives are known as "pragmas".
What I did was to hack some of the MoinMoin (Python code) (a few lines added only) so that I could
That way, browsers aware of these tags (including my browser of choice, Mozilla), can display a useful and discreet navigation bar automatically. Problem solved!
I tweaked two MoinMoin files, Page.py and wikiutil.py. It might have broken something else, you never know. It's just a little hack. Also, so that you can get a feel for what I mean, have a browse of these few presentation demo wiki pages with your browser site navigation support turned on and/or visible. Use the EditPage feature to look at the markup source and see the #pragma directives. (Please don't change anything, let others see it too – thanks).
So hurrah. We can build, present, and follow up on the presentation content in the rich hypertextual style that HTML and URIs afford, and collaborate on the content in the Wiki way.
On an incidental note, I've also added a link rel="start" tag to point to the homepage of this weblog. This is made available in Mozilla as the "Top" button in the site navigation bar.
Unfortunately it broke the feed, in that none of the content was being entity-escaped (escaping of entities in RSS is of course a whole different story which I'll leave for now). Blosxom decides whether to do entity-escaping if the content-type is "text/xml". So I made a quick fix to the check, so that the content of any flavour whose content-type was anything ending in "xml" would be entity-escaped.
Funnily enough, I was only recently talking about link rel="…" tags in Presentations, Wikis, and Site Navigation last night.
So apologies for those people whose readers may have choked on unescaped content for the past few hours from this site.
But for those (including me) who (also) have a REST bent, there is also a tip-o'-the-hat style flavour that has interesting possibilities. The (readonly) methods are also available as URLs like this:
http://allconsuming.net/soap-client.cgi?hourly=1
or
http://allconsuming.net/soap-client.cgi?friends=1&url=//qmacro.org/about
where the methods are "GetHourlyList()" (hourly=1) and "GetFriends()" (friends=1) respectively.
While the actual data returned in the message body is clearly Data::Dumpered output of the data structure that would be returned in the SOAP response, a slight change on the server side to produce the data in "original" XML form would be very useful indeed for pipeline-style applications, perhaps.
Erik is using these URLs to show readers examples of response output. But I bet the potential diplomacy wasn't lost on him.
Application data and RSS is something that Matthew Langham touched upon last December. And of course, this isn't just hot air, we're already generating RSS out of SAP R/3 at work for a sales (SD) application.
At the other end of the spectrum, enter Erik Benson and his creation allconsuming.net, a very interesting site which builds a representation of the collective literary consciousness of the weblogging community by scanning weblog RSS feeds for mentions of books (Amazon and other URLs, specifically ISBN/ASINs) and collating excerpts from those weblog posts with data from other web sources such as Amazon and Google. Add to that the ability to sign up and create your own lists of books (currently reading, favourites, and so on) and you have a fine web resource for aiding and abetting your bookworm tendencies.
A fine web resource not only for humans, but as a software service too. In constructing allconsuming.net, Erik has deliberately left software hooks and information bait dangling from the site, ready for us to connect and consume. Moreover, he encourages us to do so, telling us to "Use [his] XML" and try out his SOAP interface.
So I did.
While allconsuming.net can send you book reading recommendations (by email) based on what your friends are reading and commenting about, I thought it might be useful to be able to read any comments that were made on books that you had in your collection. "I've got book X. Let me know when someone says something about book X".
So I whipped up a little script, booktalk, which indeed uses allconsuming.net's hooks to build a new service. What booktalk does, crontabbed on an hourly basis, is to grab a user's currently reading and favourite books lists and then look at the hourly list of latest books mentioned. Any intersections are pushed onto the top of a list of items in an RSS file, which represents a sort of "commentary alert" feed for that user and his books. It goes without saying that the point of this is so that the user can easily monitor new comments on books in his collection by subscribing to that feed, which, aggregated by Blagg and rendered by Blosxom, would look something like this.
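The heart of that hourly job is just an intersection of lists. A minimal sketch (in Python for illustration; booktalk itself is a Perl script, and these ISBNs, names and data shapes are made up):

```python
def new_alerts(my_books, hourly_mentions, seen):
    """Return ISBNs from my lists that were just mentioned and not yet alerted."""
    mine = set(my_books)
    return [isbn for isbn in hourly_mentions if isbn in mine and isbn not in seen]

my_books = ["0596002025", "0765304368"]              # currently reading + favourites
hourly = ["0140280197", "0765304368", "0201616416"]  # latest books mentioned
print(new_alerts(my_books, hourly, seen=set()))      # ['0765304368']
```

Each alert found this way would then be prepended as a new item in the user's RSS file, with the `seen` set stopping the same mention being reported twice.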
Of course, the usual caveats apply – it's experimental, and works for me; your mileage may vary.
With the power of Blosxom, I managed to do it in 10 minutes. Using the "flavour" templating mechanism, I created a new flavour "titles" which I can then specify (//qmacro.org/about/?flav=titles&num_entries=100) when calling Blosxom to run over my file store. Wonderfully simple. And while I could put together a little mechanism to statically build such a list every 10 minutes or something (to save the CPU hit), I don't want to, and don't have to now that I'm not trying to render everything on the main weblog page.
The "titles" flavour files are here.
There was a bit of jiggery-pokery I had to perform to make it work how I wanted. First, Blosxom makes a decision on whether to insert the day/date subtitles in a weblog display based upon what content-type the flavour is. Because it decides to insert such subtitles when it sees "text/html", and I don't want Blosxom to insert the subtitles in the entry index, I set the content-type for the titles flavour to be "text/html;".
Second, the number of entries that Blosxom displays is governed by a parameter in the code. But I wanted a different number of entries in the index than in the main weblog display. So I added a line:
param('num_entries') and $num_entries = param('num_entries');
near the start to allow me to pass the value in the URI.
Hey, it works, ok?
Anyway, I wanted to start out on a fresher, less cluttered approach to the weblog mechanism. So here it is. I'll probably add things to it gradually over the weeks and months, but I thought I'd go back to basics and remind myself of how simple HTML, HTTP and URIs can be. I also wanted to get away from the problem of trying to fit everything onto one page. Add to that the fact that I'm no great artist (my main diagram medium at work is still ASCII art lines and boxes) and I'm actually more comfortable with this simple layout compared to the previous one. Funnily enough, it looks like Mark has been redesigning and simplifying too.
I'm using the 0+6i beta 2 version of Blosxom, with the anticipation of moving to 0+7i when it's ready. (It's not unlikely that I've broken things in this rejig – please let me know if I have – thanks!)
"Interesting!" I think, as I go to their site and browse the founders' resumes, where I find something rather disturbing. What I find is that various US Patents are being paraded. These patents seem to be predominantly for software and methodologies, rather than inventions. Picking one at random, US6226654: Web document based graphical user interface – this seems to be using specific web technology components (HTML forms and graphics, for example) for exactly the use they were originally intended (GUI-in-browser). How the heck can you patent that?
Call me naive, call me an old fogey, but I do question the use of patents for programs and applications of software. At least the Strangeberry people aren't trying to keep them a secret (in fact, quite the opposite!)
What's really interesting is that a pattern is emerging. The interface description table in my "working notes" (aka final documentation :-) that I've written to describe the details of the latest project bears a remarkable resemblance to the table in the RESTful RT experiment and also the one in Joe's RESTlog interface. For each interaction, they each roughly show:
Incidentally, Piers (my partner in code crime) has just written about the client-end of one of the RESTful projects at work.
I discovered a nice RESTful bonus when doing the documentation too – I could link directly to the URLs of some of the services from within my (HTML/Wiki-based) documentation, to show examples. That's turning out to be very useful.
I just read (via Der Schockwellenreiter) about another potential assault (albeit seemingly well-meant) on the same. Wired reports on a charging plan for spammers, with mechanisms that would allow genuine emails to get through.
I don't know what the answer is. For now, I'm happy, having installed the excellent SpamAssassin and enjoying virtually spam-free email bliss.
Lots to catch up on indeed. Seems the extended community continues to be busy, innovative, and still rather passionate about things.
Anyway, as a starter for ten, I've dusted off a little abandoned project that I started shortly before OSCON this year and talked about it a bit there (on RESTifying RT). I've written up a few notes with a view to (a) crystallizing my thoughts and (b) thinking through the API I had (and bits I've added) so I can write a cleaner implementation. Maybe.
What should the value of the rdf:about be? The URI of the RSS file itself or the URI of the document that the RSS is describing? This is not a new question – it's been debated already, and I'm not trying to dredge up the issues in any particular way now; I just want to get my (seemingly random) thoughts down here (weblogging for me is a great framework in which to marshall my thinking, which I'm trying to make the most of in this period of infrequent connectivity).
There are some arguments for the value of rdf:about to be the URI of the HTML document (I'm using HTML examples in this post for simplicity's sake), and others for it to be the URI of the RSS document itself. Here's what I've been thinking:
Thoughts against the value being the RSS URI:
Thoughts in favour of the value being the RSS URI:
One interesting thing to note concerning the "representation" question, and in relation to the HTML <link rel="…" … /> construction, is that the different possible values for the rel attribute show both alternatives: for example the value "alt" suggests an alternative representation, whereas the value "home" suggests a separate resource altogether.
Regarding the content negotiation comments, and the HTTP headers that are employed, I am reminded of other thoughts about RDF and resources in general. A question I had (well, still have) is "Where are the statements about the statements?" Yes, I guess I'm talking about reification, but not in the specific technical sense. I'm more interested in this: given a resource, how do I know where the (RDF) statements are that describe it? Appendix B "Transporting RDF" in the RDF Model and Syntax (M&S) Specification describes four ways of associating descriptions with the resource they describe – "embedded", "along-with", "service bureau", and "wrapped". I've been thinking of the pros and cons of supplying an HTTP header when the resource is retrieved, like this:
X-RDF: (URI-of-RDF-file)
Now this isn't a statement about a statement in the RDF sense, but it sure tells you where the statements about a resource are to be found. Hmmm…
Anyway, back to what the value of rdf:about should be. I think the difficulties and questions arise because of the special relationship between RDF and RSS that I mentioned last time. Perhaps because other (non-RDF) RSS formats exist, the RDF and RSS are seen as separate things, so the RSS is a valid candidate for description (by RDF). This becomes meta meta data – a description of the description. Hmmm. One perverse extrapolation of this (taking the fact that RSS is RDF, and considering rdf:about="URI-of-RSS") is that all RDF files would be written to describe themselves, and not the actual original resource. Perhaps what I'm trying to say is that this crazy scenario is an argument against having the RSS URI in the rdf:about.
So, whatās the answer?
I donāt think there is a definitive one, apart from āwhatever makes more sense for your particular instanceā. I donāt know if this is right or not, but it sure is stimulating.
One last question to bring this rambling to an end. What's the semantic difference between specifying the RSS URI for rdf:about in an RSS file, and specifying blank (i.e. rdf:about="")? About the same difference as there is between "self reference" and "self reflection"? ;-)
I'd been contemplating what a namespace-aware XML parser for RSS 1.0 would look like, and how it would work in relation to the RSS modules. (Of course for Perl programmers, for example, there's the XML::RSS parser on CPAN, which is namespace aware but relies on the RSS 1.0 namespace being the default namespace - in other words, if you specify a prefix for the RSS 1.0 namespace, say "rss10", rather than have it as the default namespace in the document, and prefix the RSS 1.0 elements with this prefix ("rss10:channel", "rss10:item", etc) then XML::RSS isn't going to like it. But I digress...)
While namespace-aware XML parsing is indeed important, and namespaces are fundamental to RDF, the importance of handling namespaces correctly when parsing had clouded a question that I knew existed but hadn't found the right words for, until now.
"What is the significance of RDF in RSS?" Actually, that's not quite right.
While I've been looking at the RDF in RSS, I've been concentrating on the bits that "look like" RDF - the stuff that I highlighted in bold in the example RSS (rdf:about, rdf:Seq, and so on). But it's not as if there are some "bits" of RDF in a format that's RSS ... the format RSS 1.0 is an RDF application. In other words, all of RSS 1.0 is RDF. The fact that it's very similar to non-RDF RSS formats like 0.91 is of course an intended advantage. And the fact that the "transportable form" that RDF takes is XML (RDF can be expressed in node/arc diagrams, or other forms such as Notation 3, or "N3") also makes it nicely "compatible".
"So what?", I hear you ask.
Well, I've been wondering how complicated an XML parser (yes, a namespace aware one, but that's not significant here) would have to get to support the plethora of RSS 1.0 modules available now and in the future. To be more specific, let's take an example. Consider the creator property (element) from the Dublin Core (dc) module. The property is normally used by specifying a literal (a string) as its value, thus:
<dc:creator>DJ Adams</dc:creator>
But what about rich content model usage of properties? Consider the use of this property in the discussions of how to splice FOAF with RSS. Dealing with a new element from a defined namespace, where the usage is of the open tag - literal value - close tag variety, is not that difficult when parsing based on XML events. But what about this, which is based on one of the suggestions from Dan Brickley in the discussion and further discussed on the rdfweb-dev list:
<dc:creator>
  <foaf:Person>
    <foaf:Name>DJ Adams</foaf:Name>
    <foaf:seeAlso rdf:resource='...' />
    ...
  </foaf:Person>
</dc:creator>
Suddenly having to parse this, as opposed to the simple "literal value" example, is a whole new ballgame in state management ("where the hell am I now in this XML document and what do I do with these tags?"), and at least for this author, writing an XML parser to cope with all such data eventualities would be rather difficult in the context of XML-event based parsing.
But that's just it. Considering an XML parser is missing the point. An RDF parser is more appropriate here. Looking at the structure of RSS 1.0 and the modules available for it from an RDF point of view suddenly made things clear for me. With RDF, the striped nature of the information encoded in XML is neatly parsed, regardless of difficult-to-predict hierarchical complexity, and translated into a flat set of triples (subject, predicate, object) that you can then interrogate. What you do with that information is then up to you.
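To make the "flat set of triples that you can then interrogate" idea concrete, here is a minimal sketch (not Redland itself; the helper name `objects` and the blank-node label are my own) of querying triples like those the dc:creator/foaf:Person example produces:

```python
# A minimal sketch of interrogating a flat set of RDF-style triples once
# a parser has produced them. The triples loosely mirror the
# dc:creator/foaf:Person example; the blank node is written as "_:genid1".

TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"

triples = [
    ("//qmacro.org/about", "http://purl.org/dc/elements/1.1/creator", "_:genid1"),
    ("_:genid1", TYPE, "http://xmlns.com/foaf/0.1/Person"),
    ("_:genid1", "http://xmlns.com/foaf/0.1/name", "DJ Adams"),
]

def objects(subject, predicate):
    """All objects of triples matching (subject, predicate, *)."""
    return [o for s, p, o in triples if s == subject and p == predicate]

# Follow the creator arc to the blank node, then ask for its name:
creator = objects("//qmacro.org/about",
                  "http://purl.org/dc/elements/1.1/creator")[0]
print(objects(creator, "http://xmlns.com/foaf/0.1/name"))  # ['DJ Adams']
```

However deeply striped the XML gets, the query side stays flat: you only ever match patterns against triples.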
There are many RDF tools, including parsers, listed on Dave Beckett's RDF resource site. One of them is Redland, his own RDF toolset. Just before I bring this post to a conclusion, let's have a look at what the RDF parser in Redland produces for the two creator examples earlier.
Simple literal value example gives:
{[//qmacro.org/about], [http://purl.org/dc/1.1/elements/creator], "DJ Adams"}
In other words:
   /--------  creator   +----------+
   | qmacro |---------->| DJ Adams |
   --------/            +----------+
Complex FOAF element structure example gives:
{[//qmacro.org/about], [http://dublincore.com/creator], (genid1)}
{(genid1), [http://www.w3.org/1999/02/22-rdf-syntax-ns#type], [http://foaf.com/Person]}
{(genid1), [http://foaf.com/name], "DJ Adams"}
{(genid1), [http://www.w3.org/2000/01/rdf-schema#seeAlso], [http://www.pipetree.com/~dj/foaf.rdf]}
In other words:
                                    type     /--------
                            +-------------->| Person |
                            |                --------/
                            |
   /--------  creator   /----------   name      +----------+
   | qmacro |---------->|  genid1  |----------->| DJ Adams |
   --------/            ----------/             +----------+
                            |
                            |                /----------
                            +-------------->| foaf.rdf |
                                  seeAlso    ----------/
(Whee! ASCII art RDF diagrams :-)
So what conclusion is there to draw from this bit of rambling? For me, it's the emphasis on RDF, rather than XML, of RSS (and in fact the subtle relationships between those three things) that is significant in itself, especially when one considers the journey to data richness that seems to demand complex (and tricky-to-parse) XML structures. And what's more, it's not specifically RSS that wins here. It's any RDF application.
<rdf:RDF ... >
  <rdf:Description ... >
    ...
  </rdf:Description>
  <rdf:Description ... >
    ...
  </rdf:Description>
  ...
</rdf:RDF>
How come, then, that instances of two of the more well-known RDF applications, RSS and FOAF, don't seem to reflect this format? Following the root rdf:RDF node and the declarations of the namespaces, we have, respectively:
<channel rdf:about="//qmacro.org/about">
  <title>DJ's Weblog</title>
  ...
</channel>
and
<foaf:Person rdf:ID="qmacro">
  <foaf:mbox rdf:resource="mailto:dj.adams@pobox.com"/>
  ...
</foaf:Person>
What, no rdf:Description? Let's have a look at what's happening here. In the RSS example, we have channel - or in its fully qualified form http://purl.org/rss/1.0/channel - a class, of which //qmacro.org/about is declared as an instance with the rdf:about attribute.
The RDF subject-predicate-object triple looks like this:
//qmacro.org/about rdf:type http://purl.org/rss/1.0/channel
or in other words "the URI (which is about to be described) is a channel".
Because RDF is about declaring and describing resources, it becomes clear that this sort of statement (technically the rdf:type triple, above) is very common. And what we saw in the RSS snippet above was the special RDF/XML construction that may be used to express such statements. If we didn't have this special construction, we'd have to write:
<rdf:Description rdf:about="//qmacro.org/about">
  <rdf:type rdf:resource="http://purl.org/rss/1.0/channel" />
  <title>DJ's Weblog</title>
  ...
</rdf:Description>
which is a tad long winded. Similarly, the long winded equivalent for the FOAF example would look like this:
<rdf:Description rdf:ID="qmacro">
  <rdf:type rdf:resource="http://xmlns.com/foaf/0.1/Person" />
  <foaf:mbox rdf:resource="mailto:dj.adams@pobox.com"/>
  ...
</rdf:Description>
So there you have it. The rdf:Description isn't there because a special construction is being used in both examples. Many thanks to Jon Hanna for turning the light bulb on in the first place.
Open thinking about deep-linking
Tim Bray's strawman defence of the principle that "deep linking" on the web isn't illegal. It's a wonderfully calm and simple aspirin for the anger and frustration that builds up inside when one reads about silly legal action over "deep-linking".
RDF, define thyself
In Sean B. Palmer's document The Semantic Web: An Introduction (highly recommended!), RDF Schema is introduced, using (amongst other things) this snippet of RDF (read "rdf:type" as "is a"):
rdfs:Resource rdf:type rdfs:Class .
rdfs:Class rdf:type rdfs:Class .
rdf:Property rdf:type rdfs:Class .
rdf:type rdf:type rdf:Property .
I don't know about you, but I had to go and have a sit down to consider the implications after reading that.
Using namespaces in code
Last week on #rss-dev, Ken MacLeod pointed to a post by Dan Connolly regarding namespaces. Ken said:
A very key point (I think) drawn out in this article is that namespaces are used only to derive a (URI+localname) pair - namespaces should never be considered separate from the element name they specify. ... A namespace and localname make a single item of data, distinct from any other combination of namespace and localname.
Libraries and applications (tools) should not try to store a namespace as one "object" and try to link all of the names as "children" of those objects. So, if you're working in a language that's string-happy, like Tcl or Perl, the first thing you should do is take the namespace and element name and put them together and use them like that from then on; "{URI}LocalName" works well in Perl, for example.
Sounds obvious when you grok it, but (for me at least) it was a refreshing way to look at the whole issue of namespaces and how they're represented in XML and used in deserialised data structures.
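As it happens, Python's standard ElementTree parser uses exactly this "{URI}LocalName" (James Clark) notation, which makes for a neat illustration of Ken's point - the prefix disappears and only the (URI+localname) pair survives parsing. The sample document here is my own invention:

```python
# Sketch of the "{URI}LocalName" convention: namespace URI and local name
# are folded into a single key as soon as the document is parsed.
import xml.etree.ElementTree as ET

doc = """<rss10:item xmlns:rss10="http://purl.org/rss/1.0/">
  <rss10:title>Hello</rss10:title>
</rss10:item>"""

root = ET.fromstring(doc)
# The prefix "rss10" is gone after parsing; only {URI}localname remains:
print(root.tag)  # {http://purl.org/rss/1.0/}item

title = root.find("{http://purl.org/rss/1.0/}title")
print(title.text)  # Hello
```

Whether the author had written the RSS 1.0 namespace as the default or under any prefix at all, the parsed names would be identical - which is precisely the property XML::RSS (as described above) fails to exploit.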
While the advent of XML scripting sounds fascinating, I've also been wondering about RDF enabling us to "gracefully integrate with the world of objects" and enhance the "self-describing nature of XML". Yes, it's my current area of interest (read: I'm vacuuming up as much information as I can about it right now), and this by itself is likely to taint my vision somewhat. But reading what was quoted from Adam immediately made me think of some of RDF's features (or should I say "nature"; I guess I'm not trying to sell it):
Now it's clear that XML is not RDF. There's the bootstrapping issue with RDF applications of which we're all aware. There's no magic wand, but there are ways (such as transformations to wring out RDF essence from "flat" XML) to get going. And in the context where REST, web services, business data, and the focus on resources (URIs) intersect, RDF - as a technology for describing, sharing and linking business data - seems too significant to ignore.
Going back to Adam's quote that sparked this post, I am curious about the "native support" of XML as a data type; my limited imagination cannot see how that might happen without some sort of serialisation/deserialisation (will a term like "serdes" be this decade's equivalent of "modem"?). I am ready and willing to be enlightened :-) The great thing about RDF is that there is already a bounty of software (storage mechanisms, model and query tools, serialisers and deserialisers) that can work with RDF in many existing programming languages.
Anyway, plenty to ponder. Life is good.
<description>
, I decided to take the plunge and use the draft part of RSS 1.0's mod_content module, namely the content:encoded property, to hold the entity-encoded weblog item content. (The description element itself in core 1.0 is optional, and although I'm omitting it for now, I'm still uneasy about it - ideally I'll have a text-only abstract and be a good RSS citizen).
This is something that Jon, Sam and others have done already. While Timothy Appnel asks a good question, I'll address it here at a later stage, as Blosxom entity-encodes my HTML for me (i.e. there's not much point trying to XSL-Transform it back).
So I have modified the RSS 1.0 feed for this site to use content:encoded with a stylesheet slightly modified from last time.
Luckily, Eric van der Vlist has some XSLT stylesheets over at 4XT to do exactly that. This is the perfect opportunity for me to (a) learn more about XSLT by studying his stylesheets, and (b) reflect upon the loosely connected nature of the web by employing the W3C's XSLT Service and pointing directly to Eric's 0.91-to-1.0 stylesheet and my RSS 0.91 source, in a URI recipe similar to the earlier sidebar experiment.
This link is the URI that will automagically return an RSS 1.0 version of my weblog. Hurrah! However, so as not to abuse the transformation service, I'm caching the result and making my RSS 1.0 feed "static", like this (split up a bit for easier reading):
/usr/bin/wget -qO /tmp/qmacro.rss10
'http://www.w3.org/2000/06/webdata/xslt
?xslfile=http%3A%2F%2F4xt.org%2Fdownloads%2Frss%2Frss091-to-10.xsl
&xmlfile=http%3A%2F%2Fwww.pipetree.com%2Fqmacro%2Fxml&transform=Submit'
&& mv /tmp/qmacro.rss10 ~dj/public_html/
This is another example of the flexible nature of the shell (my favourite IDE) and programs designed and written for it. The wonderful wget program returns true if the retrieval of a resource was successful, otherwise false. I can then use the && to only overwrite the current static rendering if we've successfully got a fresh transform result.
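The pattern generalises beyond wget. Here is a sketch of the same "only replace the cache on success" idiom, with a placeholder function standing in for the real fetch (the URLs in the incantation above are the real ones; `fetch_fresh` is my invention):

```shell
# Sketch of the write-to-temp-then-move-on-success pattern.
fetch_fresh() { echo "fresh feed"; }   # stands in for wget -qO -

# Success: temp file is written, then atomically replaces the cache.
fetch_fresh > /tmp/qmacro.rss10.new \
  && mv /tmp/qmacro.rss10.new /tmp/qmacro.rss10

cat /tmp/qmacro.rss10   # prints: fresh feed

# Failure: && short-circuits, so the stale copy is never clobbered.
false > /tmp/qmacro.rss10.new || echo "kept stale copy"
```

Because the mv only happens after a successful fetch, readers of the static file never see a half-written or empty feed.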
I arrange for this incantation to be made once an hour, and can announce that my RSS 1.0 feed is available here: http://www.pipetree.com/~dj/qmacro.rss10
The point of RDF is to be able to describe resources. Resource Description Framework. So far so good. But what are resources? They're things that we can point to on the web - things with URIs (REST axioms, anyone?). With RDF, we can make assertions, state facts, about things. These assertions are always in the form of
'this thing' has 'this property' with 'this value'.
These assertions are often expressed as having the form "subject-predicate-object" and are referred to as "triples". RDF exists independently of XML, but what I (and lots of other people) recognise RDF as is its XML incarnation. Here's a simple example:
<rdf:Description rdf:about='//qmacro.org/about'>
<dc:title>DJ's Weblog</dc:title>
</rdf:Description>
This makes the assertion that
the resource at //qmacro.org/about has a title (as defined in the Dublin Core) with the value "DJ's Weblog".
What's obvious is that subjects are URIs. It's also easy to realise that objects can be URIs too - instead of having a Literal ("DJ's Weblog") as in the example above, you can have another resource (a URI), for example:
<foaf:Person rdf:ID="qmacro">
<foaf:depiction rdf:resource="http://qmacro.org/~dj/dj.png"/>
</foaf:Person>
Here, the object, the value of the foaf:depiction property, is a URI (http://qmacro.org/~dj/dj.png) pointed to directly with the rdf:resource attribute.
But what's really mindblowingly meta is that the predicate parts of assertion triples, the properties, are resources, addressable by URIs, too. Yikes! This means that RDF can be used to describe ... RDF. In case you're wondering, the properties (dc:title, foaf:depiction) don't look like URIs, but they are URIs in disguise - the URI for each property is made up from the namespace qualifying the XML element name, plus the element name fragment on the end. So for example, the dc namespace http://purl.org/dc/elements/1.1/, plus the element name title, gives http://purl.org/dc/elements/1.1/title.
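That namespace-plus-localname construction is nothing more than string concatenation, as this tiny sketch shows:

```python
# A property URI is simply the namespace URI plus the element's local name.
dc_ns = "http://purl.org/dc/elements/1.1/"
local = "title"

property_uri = dc_ns + local
print(property_uri)  # http://purl.org/dc/elements/1.1/title
```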
Anyway, the point of RDF here is to be able to make connections between things on the web. To define, or describe, relations between things; to add richness to the data out there - to declare data about the data. If we, or our machines, can understand things about the data we're throwing around, the world will be a better place for it. And to all those meta-data agnostics out there, ask yourself this - where would the database world be without data dictionaries?
So, what about these triples that exist in RSS 1.0? They're just there to add a layer of richness, a seam to be mined by RDF-aware tools. Let's have a look at a simple RSS 1.0 file. I've highlighted the RDF bits (slightly cut to fit):
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns="http://purl.org/rss/1.0/">
<channel rdf:about="//qmacro.org/about/xml">
<title>DJ's Weblog</title>
<link>//qmacro.org/about</link>
<description>Reserving the right to be wrong</description>
<items>
<rdf:Seq>
<rdf:li rdf:resource="http://www...#tech/moz-tab-bookmark"/>
<rdf:li rdf:resource="http://www...#tech/google-idempotent" />
</rdf:Seq>
</items>
</channel>
<item rdf:about="http://www...#tech/moz-tab-bookmark">
<title>Mozilla "Bookmark This Group of Tabs"</title>
<link>http://www...#tech/moz-tab-bookmark</link>
<description> I was just reading some background stuff ...
</description>
</item>
...
</rdf:RDF>
Here's what we have, RDF-wise:
And what do these RDF things do? First, each resource - the RSS channel, or the weblog it represents, and the actual items - is identified as the subject of assertions, using the rdf:about attributes. You could say that they're the "subjects of Descriptions of them". Each has a unique URI. Then, an assertion of the following nature is made about the channel:
The channel //qmacro.org/about/xml contains an ordered sequence of things, namely http://www...#tech/moz-tab-bookmark and http://www...#tech/google-idempotent.
If the RSS file were to have an image, it would occur as in other RSS versions (i.e. as an element peer of the
<image rdf:resource="..." />
element pointing to the same URI as the
The channel //qmacro.org/about/xml has an image, namely (the image's URI).
And so on.
In other words, the RDF in RSS is there to identify resources (the nodes) and to describe properties of, or relationships between, them (the arcs). The RDF content of RSS is not large. I think some people might intermingle RDF and namespace content and think "ooh, there's a lot of RDF in RSS". Sure, namespaces are fundamental to RDF, but they exist (both here in RSS and elsewhere) independently of it (although if you use namespaces such as the Dublin Core in RDF-enhanced RSS, then you're effectively, and at no extra cost, adding to the data web with the triples that come into being because of how RDF, namespaces, and XML wonderfully work together).
So, there you have it. Just a bit of a brain dump of what I've been learning over the past couple of days. Now that I understand what's going on, I for one would be very disappointed to see RDF go away from RSS. Although there are signs that this may not be the case after all. But that's another story.
Now, if I can get Mozilla to run on my VT320 ...
I had about 5 or 6 tabbed pages open with content relating to the discussion, and lo and behold, my new browser of choice, Mozilla (not least because I can now have a consistent experience on Linux and MS-Windows), allowed me to "Bookmark This Group of Tabs" all at once, and give the collection a nice little title.
Neat. It's little things like this that make for a pleasant experience. Now if only I could move from tab to tab with the keyboard instead of the mouse ...
Point #1: "GETs must not have side effects" is perhaps REST's most cherished axiom
If I had to pick one as being the most cherished, I'd go for the one that says that anything that's important is a first-class URI citizen (i.e. addressable by a URI). The "no side effects" axiom appears to be "just" a natural follow-on from the presentation of how the HTTP verbs are supposed to be understood and used.
Point #2: The 1001st call to Google is different, and [so] the [GET] query is not idempotent
In the SOAP context, a SOAP Fault will be returned by Google if you exceed your limit of 1000 calls in a day. Returning a SOAP Fault within the context of an HTTP 200 OK status is one thing. But percolating this response up to a REST (i.e. HTTP) context would imply returning, say, an HTTP 403 FORBIDDEN, with a body explaining why. This is a valid response to a GET. Having different results, different status codes, returned on a GET query doesn't necessarily imply any side effects. Indeed, in our beloved canonical stock-quote example, we don't even need to regard the HTTP status codes to see that results can be different on the same GET query (the stock market would be a very dull place if they weren't). And what about Google itself? The same search query one day will not necessarily return the same results the next day. Different query results, no implied side effects.
Point #3: So, what do you do?
Nothing different. Through REST-tinted spectacles, the 1001st GET receives a 403, and you act accordingly. No lives have been lost, no state has been changed. Potentatus idem manet. As the saying goes. Well. It does now.
Of course, these are just my thoughts. Apologies to Sam if I've misunderstood his points, and to Mark if I've potentially muddied the waters.
P.S. maybe I should have used "potestas"...
P.P.S. I'm a grey, not a black-and-white, person
http://www.pipetree.com/service/xslt
?
xmlfile=http://url/of/rss.feed
&
xslfile=http://www.pipetree.com/~dj/rss.xsl
&
cachelife=30
The "cachelife" parameter says "give me the cached version as long as it's no more than N (30, here) minutes old ... otherwise pull the RSS and transform it for me afresh, baby". (It's all explained briefly on a little homepage, which you get if you don't specify an xslfile or xmlfile parameter.)
The existing sidebar button will continue to work fine, in that a default of 60 (minutes) is assumed if no "cachelife" parameter is specified.
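The freshness check behind a "cachelife" parameter is simple enough to sketch. This is my own illustration, not the pipetree code; `fetch_and_transform` is a hypothetical stand-in for the real pull-the-RSS-and-XSLT step:

```python
# Sketch of a "cachelife" check: serve the cached file if it is fresher
# than N minutes, otherwise regenerate it via the supplied callable.
import os
import time

def cached(path, cachelife_minutes, fetch_and_transform):
    """Return the cached content at path, refreshing it if too old."""
    fresh = (os.path.exists(path) and
             time.time() - os.path.getmtime(path) < cachelife_minutes * 60)
    if not fresh:
        with open(path, "w") as f:
            f.write(fetch_and_transform())
    with open(path) as f:
        return f.read()

demo = "/tmp/feed_cache_demo.html"
if os.path.exists(demo):
    os.remove(demo)            # start the demo with a cold cache

fetches = []
def fetch_and_transform():
    # hypothetical stand-in for fetching the RSS and applying the XSLT
    fetches.append(1)
    return "rendered feed"

first = cached(demo, 30, fetch_and_transform)   # populates the cache
second = cached(demo, 30, fetch_and_transform)  # served from the cache
```

The second call never invokes the fetch, which is the whole point: the transformation service only gets hit once per cachelife window.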
Fozbaca recently pointed to something similar, which reminded me about the whole thing. I've just downloaded Mozilla 1.1, and decided to revisit the area. Things have changed - you can now plonk straight HTML into the sidebar, rather than have to use XUL. Mmmm.
So, I've had a bit of fun glueing together ideas I read about from Mark and Jon. What I've ended up with is a Mozilla toolbar button that you can click while viewing a weblog that points to its own RSS feed. The button's link is to Javascript, adapted from Mark's auto-subscribe bookmarklet. On discovering an RSS feed (and the title of the blog page), it then constructs an XSLT pipeline URL of the kind Jon demonstrated last month. The URL looks like this (split up for easy reading):
http://www.pipetree.com/service/xslt
?
xmlfile=http://url/of/rss.feed
&
xslfile=http://www.pipetree.com/~dj/rss.xsl
The /service/xslt on pipetree is something very similar to the W3C XSLT Service that Jon used. I wrote my own for various reasons. It's a lot simpler, and probably a lot dafter. The XSLT stylesheet specified is a very simple one which points to some even simpler CSS to make the RSS-rendered-into-HTML ... small enough to fit in Mozilla's sidebar, into which it goes with the call to sidebar.addPanel() at the end of the Javascript where all this pipelining started out.
It's not that efficient, and probably not that useful in the long run, but it is certainly fun and allows me to turn my Mozilla into a sort of RSS newsreader. If you want to have a go, you can drag the Javascript link from here. Feel free to improve things!
It was as much the opportunities to meet and chat with other like-minded individuals, exchange thoughts and ideas, and generally make new friends, as it was the talks and tutorials, that I (and probably many other attendees) valued there.
Needless to say, I also grabbed the chance to take my annual fill of U.S. food - chilli dogs, cheese fries, burritos, and cinnamon and raisin bagels. Yum.
About to package the thing up to take it back, I passed the house server running Linux. What the heck, I thought, and plugged it in the back. "Ooh, hello", said the kernel. I mounted the emulated SCSI device, and grabbed the pictures off the Smart Media card. Easy as that.
The tables have turned. In times past, it used to be that peripherals Just Worked with Windows (mostly because the vendors targeted the drivers to that platform). Not any more.
I'm a happy Linux user.
You can find out about the book at Amazon so I won't bother with the plot. It's a wonderful study in far-future tech - the ships, minds, and drones - which the characters, the author, and eventually you, the reader, take for granted (the tech doesn't obscure the plot or the interplay of characters, but it's wondrous all the same), in the interplay between human(oid) and artificial intelligence, and in the tangents of differing civilizations.
But what struck me most this time around was the way that I, the reader, naturally associated myself with the Culture (the civilization to which the central characters belong) - mostly, perhaps, because the Culture was the basis from which the plot stems - and regarded the Empire (the civilization that begat the game Azad) as the "aliens". But the more one progressed through this novel, the clearer it became, almost politically so, that in fact the unruly, violent, and relatively primitive Empire civilization ... was ours.
A great read.
I wonder how I could reuse this blog item as a review in the book's review section on Amazon? Hmm, how about an RSS 1.0 module and Amazon binding in support of that?
I'm trying to understand more about REST. To that end, I've just written a little RESTful interface to RT (Request Tracker), in the form of an Apache mod_perl handler, so that I can create new tickets and correspond on existing ones via a simple interface that I can call from my other apps.
Creating a ticket:
POST /ticket
(queue, subject, email, and initial ticket query supplied in body)
...
201 Created
Location: /ticket/42
Corresponding on a ticket:
PUT /ticket/42
(correspondence supplied in body, will be appended to the ticket history)
...
200 OK
(Hmm, perhaps that should be PUT to /ticket/42/history
, returning a 201 with a unique URI for that particular piece of correspondence, e.g. /ticket/42/history/20020715115442
).
Getting info on a ticket:
GET /ticket/42
or
GET /ticket/42/basics
or
GET /ticket/42/history
...
200 OK
(ticket info)
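The routing behind such an interface is mostly pattern-matching on method and path. Here is a sketch of that dispatch logic, in Python rather than mod_perl (the in-memory ticket store, `handle` function, and starting id are all my own invention for illustration):

```python
# Sketch of RESTful dispatch: HTTP method + URI path select the action,
# and the ticket id / sub-resource are pulled out of the path itself.
import re

tickets = {}
next_id = [41]   # so the first created ticket is /ticket/42, as above

def handle(method, path, body=""):
    """Return an (status, headers, body) triple for the request."""
    if method == "POST" and path == "/ticket":
        next_id[0] += 1
        tickets[next_id[0]] = {"history": [body]}
        return 201, {"Location": "/ticket/%d" % next_id[0]}, ""
    m = re.match(r"^/ticket/(\d+)(?:/(basics|history))?$", path)
    if not m or int(m.group(1)) not in tickets:
        return 404, {}, ""
    ticket = tickets[int(m.group(1))]
    if method == "PUT":
        ticket["history"].append(body)   # append correspondence
        return 200, {}, ""
    if method == "GET":
        return 200, {}, str(ticket)
    return 405, {}, ""

status, headers, _ = handle("POST", "/ticket", "printer on fire")
# status is 201, and headers["Location"] names the new resource
```

The interesting REST-ish property is that the response to the POST hands back the URI of the newly created resource in the Location header, and everything afterwards addresses the ticket by that URI.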
I'm glad I had my copy of the excellent Writing Apache Modules with Perl and C close to hand, to remind me of things like $r->custom_response()
and Apache::Constants->export(qw(HTTP_CREATED))
.
FOAF is a project under the RDFWeb umbrella, and is an effort to build a vocabulary for expressing relationships between, and facts about, things on the interweb. As with REST, a key axiom (hrm, is that verging on the tautological?) is that URIs are very important, in uniquely identifying resources. There's a good introductory article by Edd Dumbill on FOAF.
I've had a first hash at a FOAF file to describe me, and it's here. In the growing fury of social network construction and subsequent mining, this could be interesting. Hey, and it doesn't have to stop there... Under the influence of a small tumbler of Glenmorangie, I can half-imagine a situation where we have compound business documents and partners in an ERP system like SAP's R/3 exposed and linked (through the philosophical transparency of REST) to one another via their URIs, with those link relationships described in a FOAFy (RDF-like?) way.
Hrmmmm...
I just read a couple of Paul Prescod's papers: A Web-Centric Approach to State Transition and Reinventing Email using REST. They're both interesting for many reasons, not least because they show some of the other 80% of HTTP in action.
Of course, some people might point out that the 80/20 "imbalance" will remain so while protocols (mechanisms, encodings?) like SOAP encapsulate much of what HTTP has to offer. Hmm, it's very difficult to write about REST and SOAP in non-loaded terms :-)
Anyway, if nothing else, in pondering the RESTian philosophy, I've been re-acquainted with the other 80% of HTTP, and with those things, like URIs, that are closely linked to it.
But the most intriguing thing was his Subscriber Interface for weblogs.com, the mechanism on which the blogToaster is based. It's a SOAP-based frontend to the weblogs.com "Recently Changed Weblogs" information. You tell the interface what weblog URLs you're interested in and it gives you SOAPy pings at the URL you specified whenever they're updated, taking care of the nasty polling business for you (and for everyone else, which is the whole point).
Inspired by this generous infrastructural act, I put together an experimental bit of code which reflects this mechanism out into Jabber plasma. It's a pubsub concentrator that sits in front of Simon's Subscriber Interface and allows any app that can send and receive simple Jabber packets to request and receive weblogs.com-based update pings via this subscriber interface, without all the tedious mucking about in HTTP and SOAP protocols [1] (with apologies to Douglas Adams).
The idea is that in the same way that the Jabber extensions for Danny O'Brien's Panopticon gave the Panopticon server some breathing space by effectively diffusing the data to Jabber entities via a conference room, so this new mechanism abstracts the Subscriber Interface out and allows many subscribers to share one subscription connection. Publish/Subscribe. One publisher, many subscribers. The publisher, in this case the Subscriber Interface, only has to send out one SOAPy ping per updated weblog URL to reach potentially many notification recipients (subscribers).
So rather than reproduce a blogToaster-like mechanism, I thought I'd have a go at putting together a mini-infrastructure on top of which lots of different blogToaster-like mechanisms could be built.
The mechanism is running at JID "weblogs.gnu.mine.nu", and the packets are based on the Jabber PubSub JEP. It's still alpha, and likely to fall over if you look at it the wrong way.
Here's an example of how it works. You send a "subscribe" packet, saying you want to be notified when DJ's Weblog is updated:
SEND:
<iq type='set' to='weblogs.gnu.mine.nu'>
<query xmlns='pipetree:iq:pubsub'>
<subscribe to='//qmacro.org/about'/>
</query>
</iq>
Then, whenever the weblog specified is updated, you get a packet pushed to you like this:
RECV:
<iq type='set' from='weblogs.gnu.mine.nu' to='user@host/resource'>
<query xmlns='pipetree:iq:pubsub'>
<publish from='//qmacro.org/about'>
<url>//qmacro.org/about</url>
<name>DJ's Weblog</name>
<timestamp>2002-07-03T21:35:51Z</timestamp>
</publish>
</query>
</iq>
The information in the name, url and timestamp tags (in the publish IQ) is taken directly from the weblog tag in the SOAP-enveloped callback message described at the bottom of the Subscriber Interface description page.
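The fan-out at the heart of the concentrator reduces to a small amount of bookkeeping: many subscribers share one upstream subscription, and one incoming ping becomes N outgoing publish packets. A sketch (my own simplification; the registry, `subscribe`, `on_ping` and `outbox` names are invented, and real Jabber packet delivery is elided):

```python
# Sketch of one-publisher/many-subscribers fan-out: a single incoming
# weblogs.com ping is republished to every JID subscribed to that URL.
from collections import defaultdict

subscriptions = defaultdict(set)   # weblog URL -> set of subscriber JIDs
outbox = []                        # (jid, packet) pairs "sent" out

def subscribe(jid, url):
    subscriptions[url].add(jid)

def on_ping(url, name, timestamp):
    # one SOAP ping in, one publish packet out per subscriber
    for jid in subscriptions[url]:
        outbox.append((jid, {"from": url, "name": name,
                             "timestamp": timestamp}))

subscribe("alice@host/res", "//qmacro.org/about")
subscribe("bob@host/res", "//qmacro.org/about")
on_ping("//qmacro.org/about", "DJ's Weblog", "2002-07-03T21:35:51Z")
# outbox now holds one packet for alice and one for bob
```

The upstream Subscriber Interface only ever sees one subscription per weblog URL, however many Jabber entities hang off it - which is exactly the breathing space described above.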
For now, as a bonus (or an immoral twisting of the Jabber pubsub packet philosophy, depending on how you look at it ;-) I've set up the mechanism to send you not only a publish IQ as shown above, but also a simple message packet with the same information, so that you can use your regular Jabber client to "process" (read: see) the pings. So if you're feeling brave, break out your Jabber debug app and send a few pubsub packets to weblogs.gnu.mine.nu. If you're not feeling so brave, you can wait until tomorrow - I've got a few helper example apps that will hopefully make things clearer. In either case, remember this: if it works, it works because of the coolness of what Dave W built, the coolness of what Simon built, and the coolness that is 'net-based collaboration and open standards. Time for bed now.
[1] Ooh, talking of HTTP and SOAP protocols, I just read an interesting XML-SIG post by Paul Prescod which made some valid (but also nicely philosophical, IMHO) points towards the end of the mail regarding whether SOAP is actually a protocol (as opposed to, say, an encoding), and how much, despite its "independence" of transport, it depends upon its binding to HTTP, as much as any protocol depends on its binding to a lower-level transport. But I digress...
It's been about a year now since I originally started researching and writing it; I'd taken a few months off work to devote time to it. It sure was a fun, but intense, time. I can appreciate much more now just how much work goes into writing a technical title, and I re-read the books on my bookshelves with awe anew.
I've updated my weblog in these two areas; each specific page - be it a year, a year/month, or a year/month/day specification - now includes the appropriate date in the page title. I've added a <link rel="home" ... >
link too, which I'd thought fleetingly about when I was looking at other attributes of the tag but had forgotten until being reminded by Mark.
To achieve this sort of thing, you can just add the extra bits in the head.html template. I've taken Blosxom's default/builtin head template (which is inside Blosxom itself) and amended it appropriately - you can see what it looks like here - to use it, just place the file in the same directory that you keep your .txt files in.
In actual fact, I use Blosxom more as an engine that generates the blog postings for me, which I then include, via SSI, in a template that holds together lots of elements, such as the calendar, and the various lists. So I don't use the head.html template file. Instead, I wrote a tiny script, blostitle, which outputs the appropriate date string for appending to the title. I include the <link rel="home" ... >
link manually in the SSI. Altogether, the <head/>
part looks like this:
...
<title> DJ's Weblog - <!--#include virtual="/~dj/cgi-bin/blostitle" --> </title>
<!--#include file="style.incl" -->
<link rel="home" title="Home" href="/qmacro" />
...
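I don't have the blostitle source to hand here, so this is a hypothetical Python sketch of how such a script might map a Blosxom-style date path onto a title suffix; the exact output format is my guess, not the real script's:

```python
import calendar

def blostitle(path_info):
    """Turn a Blosxom date path (/year, /year/month, /year/month/day)
    into a readable title suffix. A guess at what blostitle does --
    the real script's output format isn't shown in the post."""
    parts = [p for p in path_info.split('/') if p]
    if not parts:
        return ''
    year = parts[0]
    if len(parts) == 1:
        return year
    month_name = calendar.month_name[int(parts[1])]
    if len(parts) == 2:
        return f'{month_name} {year}'
    return f'{int(parts[2])} {month_name} {year}'

print(blostitle('/2002/07/03'))  # -> 3 July 2002
```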
Here are the details of how it works and how to set it up.
Last week, I was alerted to Blogdex by Ben (through the funny little Metalinker Javascript-induced "[b]" links on his RSS weblog pages). It's an interesting project that trawls weblogs and compiles link information (I don't know how wide it trawls, so your URL might not be in there. YMMV).
I thought I'd write a script to use Blogdex's information and perhaps complement Mark's blogrollfinder.py script and Dave's weblogNeighborhood tool. My script, bdexp, compiles a "neighbourhood" view of a weblog URL by following the "links to" information for that URL in Blogdex. (This is the "browseSource" Blogdex URL). You give it a weblog URL, and an optional depth (how far to descend, default 2 levels, maximum 4), and it goes away, pulls and analyses the information, and gives you a ranked list of results. I've weighted the scores - the further "down" a URL appears, the fewer points it gets.
I've "CGI'd" (ugh) my script so you can have a go too. Call it like this: http://www.pipetree.com/~dj/cgi-bin/bdexp?url=//qmacro.org/about/ and have patience while the script descends Blogdex information and does its stuff. I've deliberately slowed the script down so it doesn't hammer Blogdex's servers. In fact, results are cached too, for added politeness :->
But wait - there's more! So that the information made available through this script might be used to correlate, augment, and otherwise confuse neighbourhood information determined from other sources and methods like Mark's and Dave's, you can get XML output, rather than HTML. Just add &xml=1 to the URL like this: http://www.pipetree.com/~dj/cgi-bin/bdexp?url=//qmacro.org/about/&xml=1 and you'll get a very simple XML format containing the same data. This makes it dead easy to just pull in this Blogdex-powered neighbourhood information into your own tool. Well, that's the theory anyway :-) (You can specify the depth with &depth=N too).
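The depth-weighted scoring described above might look something like this. This is a sketch only, in Python rather than the Perl of the era: a plain dict stands in for the real HTTP calls to Blogdex's browseSource pages, and all the names are invented:

```python
from collections import defaultdict

def neighbourhood(start, links_to, depth=2, max_depth=4):
    """Depth-weighted 'neighbourhood' ranking in the spirit of bdexp.
    links_to maps a URL to the URLs that link to it (standing in for
    Blogdex's 'links to' data). URLs found further down score fewer
    points, as the post describes."""
    depth = min(depth, max_depth)
    scores = defaultdict(int)

    def descend(url, level):
        if level > depth:
            return
        for linker in links_to.get(url, []):
            scores[linker] += depth - level + 1  # deeper finds score less
            descend(linker, level + 1)

    descend(start, 1)
    return sorted(scores.items(), key=lambda kv: -kv[1])
```

For example, with `links_to = {'a': ['b', 'c'], 'b': ['c']}`, `neighbourhood('a', links_to)` ranks `c` above `b`, since `c` is found both directly and one level down.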
I've implemented a little lock mechanism so that only N users can use the script at once; I'm not sure how my server (it's only a poor old Celeron), or Blogdex, will take to massive parallelism. Ok, this is wishful thinking, of course ... hardly anyone reads this anyway ... ;-)
Thank you Cameron at Blogdex for compiling this linking info.
Usual disclaimer applies - my code is just hacked together.
So in the same way that Blosxom users can tune the format of the stories by maintaining a story.html file in their blog directory, now I can tune the format of items that Blagg spits out, by maintaining a blaggitem.html file in the same directory.
Of course, this is an unofficial mod to Blagg, but if you're interested, it's here, and for those with a "diff" bent, the differences to the original 0+4i version are highlighted here (the diff also highlights the passing of the extra $i_fn parm in the call to blaggplug::post(), as well as config variable changes peculiar to me).
Furthermore, regarding the other tag I have in this document, pointing to the RSS feed list (for which I'd specified a rel value of "feeds"), I've had a quick shufty at the allowed values for the rel attribute of the tag. While "feeds" isn't explicitly there, and "help" is the closest fit from the choices given, I've nevertheless decided to keep the link type "feeds", and qualify that definition with a meta data profile. Just to see where this leads.
This is what I interpreted from the W3C HTML specs:
"Authors may wish to define additional link types not described in this specification. If they do so, they should use a profile to cite the conventions used to define the link types."
"This specification does not define formats for profiles."
(If anyone can show me a pointer to where profile formats are described, that would be great!) So I've decided to go for a simple URI, "qmacro:weblog", for now, on the basis of this statement:
"User agents may be able to recognize the name (without actually retrieving the profile) and perform some activity based on known conventions for that profile."
This is what it now looks like:
...
I might be barking totally up the wrong tree, but that's the price of fun experimentation. I think the only thing that needs to be different is the actual URI - if we can agree on a standard global name, so much the better.
So, alongside the tag I mentioned yesterday, I've added a further tag thus:
It points to the RSS file of RSS feeds I talked about yesterday, which I point to with the [meta] link in the My Feeds list on the right.
This sort of thing should make scripts like Mark's blogrollfinder.py a lot simpler, if we can somehow standardise this too.
So what about RSS? What makes it so appealing? Well, a big reason is of course its position as a foundation of stability (despite its own temporary instability, format-wise), in the burgeoning, nay, blossoming, world of weblogs, syndication, and knowledge sharing. But I've been wondering if it's more than that. It's a simple format. But a powerful one. The RSS skeleton reflects an information model that can be found everywhere: header and body. You could say header and items. Items. Positions. What's the fundamental structure of pretty much every piece of (business) transactional data in SAP (and other ERM) systems? A document. A document, which has a header, and items, or "positions". Sales orders, invoices, purchase requisitions... the list goes on. Hmmm. Could it be that the RSS skeleton is so popular and flexible because it's one of the netspace's protean formats, and easy to grok?
RSS 1.0 celebrates that flexibility with its modular approach.
I'm celebrating that with a "meta" RSS feed, available from the [meta] link in the My Feeds list on the right hand side of this page. It's a list of all the feeds I'm subscribed to right now, in an RSS 1.0 format. Currently, I'm just using core tags, but it might be a better idea to create a simple module to enable an explicit statement of what the data is. (I know there's OCS too, but hey, it's Friday).
I do remember sitting in on an RSS BOF at last year's OSCON where we discussed the idea of having an index.rss at the website's root, rather like the robots.txt file. This link-based pointing is a nicer approach, as there's an explicit relationship between the RSS XML and what it describes.
What's more, Mark has a nifty bit of Javascript that grabs any RSS URL that it finds in this new home, and bungs it at Radio Userland's localhost-based webserver invoking a subscribe on that RSS feed. Very nice. I don't run RU, nor use IE much, but nevertheless this would work, even with Blosxom, because I'm running bladder :-)
Ouch.
Ahem.
wget.pl - a tiny wrapper around the wget command so that you can be more polite when retrieving HTTP-based information - in particular RSS feeds.
The idea was sparked by Simon's post about using HTTP 1.1's ETag and If-None-Match headers. I wanted to write as small and minimal a script as possible, and rely on as little as possible (hence the cramped code style), in honour of Blosxom, and of course Blagg, the RSS aggregator, for which the script was designed. You should be able to drop this script reference into Blagg by specifying the RSS retrieval program like this:
my $get_prog = '/path/to/wget.pl';
Don't forget, the ETag advantage is only to be had from static files served by HTTP. Information generated on the fly, such as that from CGI scripts (such as Blosxom), isn't given an ETag.
Update 06/06/2012
It's now just over 10 years since I originally wrote this post, and in relation to a great post on REST by Sascha Wenninger over on the SAP Community Network, I've just re-found the script - thanks to a comment on Mark Baker's blog that pointed to wget.pl being part of a Ruby RDF test package. Thanks mrG, whoever you are!
Here's the script in its rude entirety for your viewing pleasure.
#!/usr/bin/perl -w
# ETag-aware wget
# Uses wget to more politely retrieve HTTP based information
# DJ Adams
# Version 0+1b
# wget --header='If-None-Match: "3ea6d375;3e2eee38"' http://www.w3.org/
# Changes
# 0+1b 2003-02-03 dja added User-Agent string to wget call
# 0+1 original version
use strict;
my $cachedir = '/tmp/etagcache'; # change this if you want
my $etagfile = "$cachedir/".unpack("H*", $ARGV[0]);
my $etag = `cat $etagfile 2>/dev/null`;
$etag =~ s/\\"/"/g;
$etag =~ s/^ETag: (.*?)\n$/$1/ and $etag = qq[--header='If-None-Match: $etag'];
# build the command once, then run it (the original built $com
# but then repeated the command inline in the backticks)
my $com = "wget -U 'blagg/0+4i+ (wget.pl/0+1b)' --timeout=60 -s --quiet $etag -O - $ARGV[0]";
print "Running: $com\n";
my ($headers, $body) = split(/\n\n/, `$com`, 2);
print "Got headers: $headers\n\n";
if (defined $body) {
($etag) = $headers =~ /^(ETag:.*?)$/m;
print "Return value etag: $etag";
defined $etag and $etag =~ s/\"/\\\"/g, `echo '$etag' > $etagfile`;
print "\n==========\n";
print $body;
}
else {
print "Cached.";
}
Actually, talking about HTTP headers with basic and digest authentication, here's something else I've been wondering. Simon Fell rightly suggests using a more polite and sensitive way to grab RSS sources, by use of the ETag and If-None-Match headers. Very sensible. But what about the If-Modified-Since header?
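For what it's worth, a doubly-conditional request - If-None-Match when you have an ETag, If-Modified-Since as a fallback - is easy to sketch. Here it is in Python with just the standard library (the original wget.pl handled only the ETag side; the function name is mine):

```python
from urllib.request import Request

def conditional_get_request(url, etag=None, last_modified=None):
    """Build a polite conditional GET request. If the server still has
    the same representation it can answer 304 Not Modified with no body.
    A sketch, not the original wget.pl technique translated literally."""
    req = Request(url)
    if etag:
        req.add_header('If-None-Match', etag)
    if last_modified:
        req.add_header('If-Modified-Since', last_modified)
    return req

req = conditional_get_request('http://www.example.org/index.rss',
                              etag='"3ea6d375;3e2eee38"',
                              last_modified='Mon, 03 Feb 2003 10:00:00 GMT')
```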
Here's one advantage that email has over HTTP. A built-in queueing system. Ok, the actual queueing system is made most visible by use of email clients, where you see mails in a queue, ready to read or process. But this is just a mask over the flat stack of emails that you can pop with, er, the POP protocol.
"Yesbut", as a friend used to say in meetings and discussions. Here's something I've been pondering too. Last week I downloaded and installed the fabulous RT ("Request Tracker") - a ticketing system written in Perl. It's very flexible and extensible. RT allows tickets to be managed in queues. It also allows tickets to be created (or corresponded upon) through different interfaces - via a web interface, via email, or via the command line. Any incoming transaction is inserted into a queue (if it's a new ticket) or appended to an existing queue entry (if it's correspondence on an existing ticket). I wonder if I can build a small front end to accept HTTP-based business calls and stick them in an RT queue? Of course, I also wonder whether that would be useful, but if nothing else, it would be stimulating.
While the current rush of implementations use HTTP as the transport (witness HTTP as the most common transport for SOAP RPC, or HTTP as the designated transport in the XML-RPC specification), there are apparent pitfalls.
Firstly, look at the steam generated from the SOAP-through-firewalls debate. (On the one hand they have a point, on the other hand, it's not necessarily up to a firewall to vet at the application level - look at EDI for example). Secondly, some people are of the opinion that HTTP needs to be replaced, in the light of its apparent weaknesses for the things that people want to use it for these days. If this happens, will we change the "Web Services" name?
Thirdly, focusing on HTTP (and therefore the "Web" in "Web Services") does a, err, disservice to other protocols careening around the 'net. What about the venerable SMTP, for example? There have been valid comments made about the applicability of HTTP in "increasingly asynchronous" transactions. Fire off a request for some information, say, a quotation, and the response may take days to come back. Is this legal, moral, sensible, in HTTP? Ok, you could frame the asynchronicity in HTTP by using two request/responses (one pair in one direction and the other in the other direction: "I want a quotation, post it here when you're ready with it" -> "Ok, will do" ... "Hey, here's the quotation" -> "Ooh, thanks"). (Hmmm, why do I think of RESTful things when mulling this over in my head?) You could of course go for one-way messages suspended in a SOAP solution to achieve the same effect, I guess. Hmmm, so many options, so little time.
Anyway, as an alternative to HTTP, how about transporting this stuff over other protocols, like the aforementioned SMTP (or a combination of SMTP and whatever endpoint protocol - POP, IMAP, and so on - you need). Or even Jabber! Both lend themselves to asynchronous interaction more than HTTP does. Or so it seems to me. Both involve to a greater or lesser degree some modicum of store-n-forward, allowing the endpoints to talk at their leisure.
Of course, this is all very high level, and based, as usual, on my ignorance of detail. But I often prefer to wonder about things rather than to know straight away which is right and which is wrong. And here, just like in the REST vs SOAP RPC debate, I don't think there is a definitive right and wrong way. Horses for courses.
Postscript
I wrote the above at 30000 feet (or however high it was) above the English channel. Now that I'm on good old terra firma, travelling in a rickety South Central train from Victoria Station, I've had another thought. Revisiting the REST architectural style in extremis (what's all this Latin doing here?) in the context of what I wrote above (ha, in both senses of the word) would be a good mental exercise and a focused way of finding out more about how it works. From what I understand, the URI is exalted as a holy pointer, being in many respects the blessed reference mechanism to the business objects that are exchanged in service provision and consumption.
I think I'll stop now before this prose goes completely off the scale; suffice it to say that instead of the service returning a quotation, as a payload XML document in the body of the return email, it plonks it somewhere where it can be retrieved by HTTP, and sends a little notification with the URL instead.
Hmm, lots of things to think about...
Why, the Panopticon, of course. An architectural figure, envisioned by Bentham, which allows one to see but not be seen. "The Panopticon" is also the name given to a wonderful experiment in "blogger stalking" (a phrase from BoingBoing) with avatars and a floormap of the conference area.
This Panopticon's creator, Danny O'Brien (of NTK fame), put out some instructions as to how the thing worked, and mentioned that he would be streaming the metadata out of a port on his server. He asked if anyone could regurgitate the data to a Jabber room so other clients could grab it from there rather than hammer his server, so I took up the challenge :-) This is, in essence, poor man's pubsub (again) in the spirit of load dissipation: with a ratio of, say, 1:50 (Panopticon port connections to Jabber conference room listeners) we can relieve the strain and have a bit of fun.
Ok, well it was a very quick hack. The data coming out of the server port is a stream of XML. Hmmm. Sounds familiar ;-) I quickly hacked together a library, Panopticon.pm, based loosely upon Jabber::Connection, a Perl library for building Jabber entities (XML streams flow over Jabber connections, too, y'know). With this quick and dirty library in hand, I wrote an equally quick and dirty script, panpush.pl, which uses Panopticon.pm and Jabber::Connection to do this:
The Panopticon data is XML. Jabber is XML. So I decided the nice thing to do would be to avoid just blurting XML into the conference room - that would be like shouting gobbledygook in a room full of people. Instead, I wrote something sensible to the room each time some data fell out of the end of the Panopticon socket (the name of the blogger's avatar), and attached the actual Panopticon XML as an extension to the groupchat message. Here's an example:
Panopticon produces this:
<icon id='4ee9da17f5839275ad0ca5d58c2bacaa'>
<x>456</x>
<y>255</y>
</icon>
panpush.pl sends this to the room:
<message to='panopticon@conf.gnu.mine.nu' type='groupchat'>
DJ Adams
<x xmlns='panopticon:icon'>
<icon id='4ee9da17f5839275ad0ca5d58c2bacaa'>
<x>456</x>
<y>255</y>
</icon>
</x>
</message>
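The wrapping step can be illustrated in a few lines. This is not the actual panpush.pl code (which is Perl); it is a Python sketch, and note that the example above shows the avatar name as bare text in the message, which here goes into a <body/> element as a real Jabber message would carry it:

```python
import xml.etree.ElementTree as ET

def wrap_for_room(icon_xml, room, avatar_name):
    """Wrap a Panopticon <icon/> fragment in a Jabber groupchat message,
    roughly as the post describes panpush.pl doing: human-readable body,
    raw XML attached as an <x/> extension."""
    msg = ET.Element('message', {'to': room, 'type': 'groupchat'})
    ET.SubElement(msg, 'body').text = avatar_name
    x = ET.SubElement(msg, 'x', {'xmlns': 'panopticon:icon'})
    x.append(ET.fromstring(icon_xml))
    return ET.tostring(msg, encoding='unicode')

print(wrap_for_room("<icon id='4ee9da17'><x>456</x><y>255</y></icon>",
                    'panopticon@conf.gnu.mine.nu', 'DJ Adams'))
```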
The scary thing is that it seems to work! Grab your nearest Jabber client and enter room
panopticon@conf.gnu.mine.nu
(remember, you don't have to have a Jabber user account on gnu.mine.nu to join a conference room there - just use your normal Jabber account, say, at jabber.org). If it's still working, you should see "panopticon" in that room - that's the panpush.pl script. When some avatar metadata changes and pops out of the Panopticon server's port, it will appear in the room - currently represented as the avatar's name.
Want more? Want to actually do something with the data in the room?
Well, I've just written an example antithesis to panpush.pl - panclient.pl. This connects to the conference room, and listens out for packets containing the panopticon XML extensions. It just prints them out, but of course you can do with the data as you please. It's just an example.
Oh, one more thing. As panpush.pl catches the panopticon XML and squirts it into the room, it also caches the actual avatar data, keyed by each icon's id attribute. I plan to allow queries to be sent to the "panopticon" room occupant, probably in the form of jabber:iq:browse IQ queries, so that clients can find out about what avatars are currently around, and what properties they have (name, url, xy coordinates, and so on).
To get a list of avatars in the Panopticon, you can send a query like this:
<iq type='get' to='bot@gnu.mine.nu/panopticon' id='b1'>
<query xmlns='jabber:iq:browse'/>
</iq>
The response will look something like this:
<iq type='result' from='bot@gnu.mine.nu/panopticon' to='dj@gnu.mine.nu/home' id='b1'>
<panopticon xmlns='jabber:iq:browse' jid='bot@gnu.mine.nu/panopticon' name='The Panopticon'>
<icon jid='bot@gnu.mine.nu/panopticon/2b8bf6a9e9a173f95f27ae1a8d6fb2f4'>
<name>Blammo the Clown</name>
</icon>
<icon jid='bot@gnu.mine.nu/panopticon/3ab6c14732e8937cf26db26755c4aae7'>
<name>Rael Dornfest</name>
</icon>
<icon jid='bot@gnu.mine.nu/panopticon/47e48c975621bf43fc81622265d47a31'>
<name>Dan Gillmor</name>
</icon>
...
<icon jid='bot@gnu.mine.nu/panopticon/deedbeef'>
<name>#etcon bot</name>
</icon>
</panopticon>
</iq>
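A client just needs to walk that result and pull out the names. Here is a hypothetical parsing sketch in Python (the real tooling here was Perl); note that ElementTree qualifies the child elements with the jabber:iq:browse namespace, so we match on the qualified tag:

```python
import xml.etree.ElementTree as ET

def avatar_names(browse_result_xml):
    """Extract avatar names from a jabber:iq:browse result like the
    one shown above. Illustrative only."""
    root = ET.fromstring(browse_result_xml)
    ns = '{jabber:iq:browse}'
    return [name.text
            for icon in root.iter(ns + 'icon')
            for name in icon.findall(ns + 'name')]
```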
(I'd originally just returned each icon without the
You can "drill down" with a further query (sent to the JID of the icon you're interested in - remember, Jabber browsing is most effective when you can navigate a hierarchy of information via their nodes' JIDs) like this:
<iq type='get' id='b2' to='bot@gnu.mine.nu/panopticon/2b8bf6a9e9a173f95f27ae1a8d6fb2f4'>
<query xmlns='jabber:iq:browse'/>
</iq>
Which should hopefully elicit a response like this:
<iq type='result' to='dj@gnu.mine.nu/home' id='b2' from='bot@gnu.mine.nu/panopticon/2b8bf6a9e9a173f95f27ae1a8d6fb2f4'>
<icon xmlns='jabber:iq:browse' jid='bot@gnu.mine.nu/panopticon/2b8bf6a9e9a173f95f27ae1a8d6fb2f4' id='2b8bf6a9e9a173f95f27ae1a8d6fb2f4'>
<url>http://progressquest.com/expo.php?name=Blammo the Clown</url>
<text>Mmm... Beer Elementals</text>
<x>805</x>
<y>494</y>
<name>Blammo the Clown</name>
</icon>
</iq>
This should reflect the latest information to be had on that avatar.
So Morbus told me what the links should look like, and I just added them to my crontab'd script that produces the My Feeds list from Blosxom's rss.dat file.
You can see the result in the form of the [A] links in the list - click on these if you're running Amphetadesk!
Let's see...
P.S. Thanks Jon for the email alert :-)
A: The "jabberconf" Blaggplug - a plugin for Blagg that pushes RSS item info to a Jabber conference room (akin to an IRC channel) as they're pulled in the aggregation process.
This idea goes back a long way, to my pre-Jabber days (!) when I was experimenting with getting my business applications to write messages to IRC channels and writing various IRC bots to listen out for and act upon specific messages (carrying out simple processes, relaying messages to further channels, and so on).
Just as HTTP-GET function call based apps - such as the "open wire service" Meerkat, and other RESTian applications ("RESTful" may twang too many antennae ;-) - are both human and machine friendly, so is simple publish/subscribe via spaces. Just as people (with browsers & URL lines), and applications (with HTTP libraries) can get at HTTP-GET based information, so people (with Jabber groupchat clients), and applications (with Jabber libraries), can get at published data in open spaces such as Jabber rooms.
The plugin is very raw, as I've just written it tonight and done some minimal testing using my current feedlist.
Have fun!
(Leslie has been doing stuff too, but his main site is suffering an outage at the moment and I can't get to the right link - get well soon, 0xDECAFBAD!). While reading, I've been playing around a bit too, and have a little script which is fed a 'tail'ed access_log and looks for referers, grabbing the titles of their pages if possible (using an Orcish maneuver-like mechanism to cache page titles and be a good HTTP citizen).
I run this script in the background:
nohup tail -f access_log | perl lpwc >refer.list 2>refer.log &
and then periodically pull the last ten unique referers and create a nice list that I can then SSInclude in this weblog:
uniq refer.list | tail | perl referers.pl > referers.incl
If nothing else, it reminds me of how powerful *nix command line tools and the humble pipe can be.
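The title-caching part - the "Orcish maneuver", which in Perl would be something like `$cache{$url} ||= fetch_title($url)` - translates to Python like this. A sketch only, with a stub standing in for the real HTTP fetch:

```python
title_cache = {}
fetch_count = {'n': 0}

def fetch_title(url):
    """Stand-in for an HTTP fetch that extracts a page's <title>."""
    fetch_count['n'] += 1
    return 'Title of ' + url

def cached_title(url):
    """Only hit the 'network' the first time we see a referer URL;
    afterwards, serve the title from the cache."""
    if url not in title_cache:
        title_cache[url] = fetch_title(url)
    return title_cache[url]
```

Calling `cached_title` twice for the same URL performs only one fetch, which is the whole point of being a good HTTP citizen here.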
]]>["im://jabber/bull@mancuso.org"].examples.getStateName (12)
By āeck, it takes me backā¦
A hearty congrats to Dave (and Jeremy and Eric of course). āThis Bing!ās for you.ā
I wrote earlier about hacking support for lists into Blosxom, so I could generate simple list files and Blosxom would format them nicely and fold them into the blog output for me. I decided that I wanted to go on using vanilla Blosxom, rather than a custom one, and achieve the list display another way. So I've turned things around, and am now running Blosxom as nature intended, and have included the output in the framework you see here via SSI. The rest of the lists and tables are also SSIncluded files.
The main drive for this change was to be able to more easily incorporate the latest "hack" which you can see to the right - a calendar showing the current month, with links to posts on relevant days. This idea is of course not new, and can be seen on many a Radio Userland powered weblog. The calendar here is generated by HTMLifying the output of 'cal' and looking through Blosxom's data directory at the timestamps of the .txt files I've created. Simple.
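The cal-HTMLifying idea can be sketched like so. This is Python rather than the Perl-and-cal pipeline described, and the markup and names are mine, not the blog's actual code: days that have posts become links to the corresponding /year/month/day Blosxom path.

```python
import calendar

def month_calendar(year, month, post_days):
    """Render a cal-style month grid, linking days found in post_days
    (e.g. days on which .txt post files were created) to their
    /year/month/day archive path. Illustrative markup only."""
    rows = []
    for week in calendar.monthcalendar(year, month):
        cells = []
        for day in week:
            if day == 0:
                cells.append('  ')  # padding outside the month
            elif day in post_days:
                cells.append(f'<a href="/{year}/{month:02}/{day:02}">{day:2}</a>')
            else:
                cells.append(f'{day:2}')
        rows.append(' '.join(cells))
    return '\n'.join(rows)

print(month_calendar(2002, 7, {3, 12}))
```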
As you can see, this blog's location -
is new, and easy to remember. The content and CGI scripts are actually served from my ~dj directory on www.pipetree.com; the "qmacro" name is masking this by means of some AliasMatch directives in our Apache web server.
It's simply because I can.
What a rich seam of ideas and mind-stimulation there is to be mined within this community. And no coal dust, dank conditions, and canaries in sight. I'll drink to that.
rsync -tazve ssh ~/blog/*.txt gnu:blog/
and stuck the alias definition into my .bashrc file. Hey presto. My trusty Linux Vaio is always with me, and I can blog there wherever and whenever I wish. The 'net cafe at London Gatwick airport was getting a bit too pricey for terminal access, but I passed it last week to discover that it now offers simple 10baseT connections for laptop users. Five quid for 40 minutes. Still steep, but perhaps worth it for the odd time I absolutely desperately must sync up.
The point of the 5335 link is that the target is 127.0.0.1, that is, localhost. This time, I didn't want to run a script of any significant size on my localhost; rather, I thought that if I could just run a simple redirector, which I could configure and get to redirect calls to 127.0.0.1:5335 to a location of my choosing, I'd be able to concentrate running the "complicated" (relative term) part of adding a feed to Blagg's rss.dat, on the host that serves my weblog.
So 5335-redir.pl is a simple, configurable redirector, which I run on my localhost, and bladder is the script that receives a feed URL (via the redirector) and adds it to rss.dat.
Very simple. In the spirit of Blosxom and Blagg, I hope.
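The heart of such a redirector is just rewriting the incoming request URL against a configured remote base, keeping the path and query intact. Here is a Python sketch of that computation (5335-redir.pl's internals aren't shown in the post, so the names and the example path are invented):

```python
from urllib.parse import urlsplit

def redirect_target(request_url, base):
    """Compute where a localhost:5335 request should be redirected:
    keep the path and query string, swap in the configured remote base.
    A sketch, not the actual 5335-redir.pl logic."""
    parts = urlsplit(request_url)
    target = base.rstrip('/') + parts.path
    if parts.query:
        target += '?' + parts.query
    return target
```

The redirector would then answer each request with a 302 whose Location header is the value this function returns.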
Here's a step-by-step list of what happens:
It's very simple; Blagg uses a text file, rss.dat, to keep a list of RSS feeds that I want to subscribe to. I wrote a simple script to read that information and to create a list file that I can then pull into the weblog template like the other lists here. Very simple. I'll probably cron the script to run every so often, to keep the list up to date.
Now all I need to do is resurrect the [5335] script so that I can have other feed info inserted into (appended onto, probably) Blagg's rss.dat file when I click one of those coffee mug icons.
I'm sure it's because Blosxom's scent of simplicity has people doing what they want, with code, because there's only the slightest whiff of required compliance, to fit in with how Blosxom works, and that's something that all platforms, languages, and minds share: files.
With a simple script, I just parse Galeon's bookmark file to pull out the items in the "to read" folder, and make them into a simple list, which I then drop into my file area reserved for Blosxom. A simple reference to this list in the template, and ecce, I have my "to read" list available on every workstation I sit down at.
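Galeon kept its bookmarks in XBEL, an XML format, so the parsing side of such a script is straightforward. A hypothetical Python sketch (the original was presumably Perl; the folder name and list markup are just examples):

```python
import xml.etree.ElementTree as ET

def to_read_links(xbel, folder_title='to read'):
    """Pull the bookmarks out of a named folder in an XBEL document and
    emit a simple HTML list, in the spirit of the script described above."""
    root = ET.fromstring(xbel)
    items = []
    for folder in root.iter('folder'):
        title_el = folder.find('title')
        if title_el is not None and title_el.text == folder_title:
            for bm in folder.findall('bookmark'):
                t = bm.find('title')
                items.append(f'<li><a href="{bm.get("href")}">{t.text}</a></li>')
    return '<ul>\n' + '\n'.join(items) + '\n</ul>'
```

The resulting `<ul>` fragment is exactly the sort of thing that can be dropped into a file and SSIncluded from the blog template.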
I'm really taken with the simplicity of Blosxom. As the weblog entries are simply textfiles, I think I'm going to start using CVS to give me the ability to write offline too. I travel a lot, and my trusty Sony Linux laptop goes everywhere with me.
It's reproduced below.
Everyone in the Jabber world seems to have a nickname -- how did you get yours?
Well, it comes from the world of SAP. Prehistoric SAP, that is, from when I was working with R/2 in the late eighties. In those days, it was nearly all in (S/370) assembler - none of this newfangled ABAP language thank you very much :-) A Q-macro is an assembler representation of a logical file structure, defined in the SAP system's data dictionary. The names were 5 characters in length, for example QKONP, which was the name of the macro that represented the field of the contract document item structure. I picked the name 'qmacro' as it reminded me of happy times, and it was unlikely that it would clash with anyone else's.
Where do you live now, and what interesting places have you lived in the past?
Well, I split my time between England and Germany. I work in Brentford, West London, and live in Hailsham, in East Sussex, near the south coast. I also spend time in Krefeld, Germany. Actually, when I think about it, I spend most of my time somewhere between the two. I grew up in Manchester, but moved to London when I went to university there.
I was lucky enough to have a job (hacking SAP) that took me to lots of places; I've lived and / or worked in plenty of places in my time - Hamburg, DĆ¼sseldorf, Bonn, Heidelberg (Germany), Paris, Strasbourg (France), Brussels (Belgium), Copenhagen (Denmark), Rotterdam (Holland), NY state (USA), and plenty of places in the UK. To paraphrase some song lyrics, "wherever I lay my laptop, that's my home".
When and how did you first get involved in Jabber?
I remember it well. It was the autumn of 2000. I was working in London, in a small, cramped, and hot office. I was getting hot under the collar trying to install this new open XML-based messaging system called 'Jabber'. I'd been hacking around with IRC, and wanted to see what this new system was like. As I've mentioned elsewhere, my head was full of XML-RPC stuff, IRC bots, and system-to-system messaging. Although at the time Jabber was being touted as an IM system, I was intrigued.
I almost gave up at the start, in that it took me over a day to get it installed and working, with all the individual library dependencies that were required at the time - I think it was the 1.0 version of the jabber.org server - and the initially cryptic configuration file. But I persevered, and I'm glad I did. (The fact that I found the configuration file initially cryptic is one of the reasons why I focus so much on helping the reader of "Programming Jabber" to understand how the configuration works - that's why Chapter 4 is so, well, long :-)
I was very much a Perl enthusiast at the time (and still am!) and I got stuck into the Net::Jabber modules almost immediately.
Which Jabber clients do you use or like the most?
It has to be Jarl. It's great - does what it says on the label, appeals to my graphical good taste, and I can hack bits onto it in Perl. I've used Gabber too - that's a nice client, but overall I prefer Jarl. I've read enough of Ryan's code in the Net::Jabber modules to understand his coding style, so I feel at home inside Jarl.
When I'm on the move, and have only got an ssh window available, then of course I use sjabber to join meetings and conferences.
You're well-known in the Jabber community for having written a book about Jabber -- how did that come about and what did you learn in the process?
You mean I'm not well known for my good looks and charm? ;-) Ok, well, you can't win 'em all. Seriously, though, about the book. It was early 2001, I'd just had a couple of Fun With Jabber articles published on the O'Reilly Network, and had a few other articles and bits of documentation relating to Jabber out there too. A chap from O'Reilly called Chuck Toporek, who later was to be my editor, called me. To cut a long story short, after I had bored him half to death with the sort of things someone could write about Jabber, he asked me if I wanted to write it. I almost fell off my chair.
If there's one thing I learned, it's that if a publisher asks you if you would like to write a book, you say "YES!". It was the opportunity of a lifetime for me. I had to grab it with both hands. I had to submit a detailed proposal, which outlined the book's content, and so on. This was actually a lot harder than I thought it was going to be, as I had to essentially write the book in my head up front, so to speak, so I could work out what was going to be where. It really paid off, though, as the outline was my road map while writing. I'd have been totally lost without it.
I quit my job to write it; I wanted to enjoy the experience as much as I could. The word 'enjoy' is relative, though. Although ultimately rewarding, writing the book was very hard work. I got to know the jabber.org server codebase quite well, as I used to pore over it while drinking in a coffee shop just up the road.
What do you consider some of your most important contributions to Jabber?
Gosh, I dunno. I've written quite a bit of documentation, and have tried to help out on the mailing lists wherever and whenever I could. I enjoy 'promoting' Jabber, writing articles and giving talks to anyone who will listen :-) I also have a few Jabber modules on CPAN under my belt - Jabber::Connection being the most important, I guess.
What projects or code are you working on nowadays? Well, at the moment, I'm working with the Radio Userland community to bring Jabber into the loop to solve various issues relating to the addressing of endpoints that are behind firewall/NAT mechanisms. We've got a 'bridge' that sits between Radio Userland desktops and the so-called Radio Community Server mechanism; this bridge translates between XML-RPC and Jabber traffic to carry weblog update notification requests and pings. We're using packets based on the pubsub JEP 0024 to achieve this. It's working really well.
I'm also experimenting with Peerkat, a personal weblog mechanism, and have added some Jabber pubsub juice to that too. There's a lot of really interesting stuff out there that is just crying out for integration with Jabber.
Other than that, I'm keeping an eye on the 1.5 development work, so I'll be in a position to explain bits and pieces to anyone who expresses an interest.
What are your favorite programming languages? Well, it's got to be Perl, of course. What a silly question :-) Actually, Python has been growing on me too, ever since I hacked on bits of jabberpy early last year. Any language that's not too difficult for me to (a) understand and (b) do anything useful without jumping through hoops is fine by me. I've still got a soft spot for Atom Basic, a very odd dialect of Basic on the Acorn Atom. You could inline 6502 assembler in your code, too!
What's your favorite music to code by? Oh, I've rather an eclectic taste, I'm afraid. Naming any one artist would be misrepresentative of my favourites, so here's a random list of stuff I've been listening to at the keyboard recently: Electric Light Orchestra, Talvin Singh, The Smiths, Violent Femmes, Natalie Merchant, Bentley Rhythm Ace, Daft Punk, James Taylor Quartet, Led Zeppelin, Grateful Dead, Motorhead, Manu Dibango. I also like listening to BBC Radio 4.
What hobbies do you pursue when you're not working on Jabber? Most of my hobbies are orientated towards our son, so right now you'll find me kite flying, bicycle riding, and building stuff with Lego. I do like cooking; it's a great way to relax and switch the rest of the world off. Joseph and I especially enjoy making our own pasta, cakes, and pies. Anything that involves a lot of mess, basically.
Do you have a website or weblog where people can learn more about you? Well, not really. I've never been that good at maintaining that sort of thing. I do have a Jabber-related site, which is at http://www.pipetree.com/jabber/. My experiments with Peerkat, in the form of a weblog, can be found at http://www.pipetree.com:8080. I did have a GeoCities website for my family, but it's disappeared. Our son Joseph had a website up within 60 hours of being born :-)
What do you think are the most important strengths of Jabber? The fact that the protocol is open, and flexible. It doesn't try to do too much - it just gives you the building blocks to construct your own solutions. Indeed, look at the activity in the standards-jig mailing list, where people are coming up with extensions to the protocol left, right and centre. Furthermore, because the basics are straightforward, it's easy to wrap your head around it and get going with solutions without much ado.
I think the fact that the Jabber development community is a friendly place to be, despite any clashes that might occur, is a very important aspect too. Life's too short already not to enjoy what you're doing.
What are some of the weaknesses you think need to be addressed? Weakness? Bah, that word isn't in my vocabulary :-) But if I had to pick one, it would be Jabber's name, and what people think it stands for. Jabber is not just IM, as we all know. The situation is improving, in that people are beginning to 'get it' and think of Jabber in more general XML-based messaging terms, but it's been a long hard slog to get here.
What cool applications would you like to see built using Jabber? Well, there's a ton of potential in the business world (where I come in from). Getting Jabber into commerce related projects is what I'm interested in. There's so much data to manage and move around in the business world, and an XML-aware transport like Jabber seems to be an ideal fit. I've already built some demo scenarios, bolting Jabber (and other Open Source tools) onto SAP R/3, and it looks very promising.
What other computing projects do you most admire, and why? Hmmm. A difficult one. I'm not really sure. I'm not that much of a follower of computing projects per se; I'm more a user of their products :-) Rather than projects, there are many people that I admire. Take the Perl community, for example. As well as the gods who make Perl tick, there are people closer to userspace (and therefore closer to me) that are doing fantastic things to further Perl. Matt (XML), Brian (Inline) and Damian (mind stretching) to name but a few.
What can the Jabber community do to improve? There's been a lot of effort to get process built into the community and the Foundation; JIGs, JEPs, and the council, for example. We need to make sure that the time and effort invested in setting these things up is not wasted; we need to see that the processes put in place are processes that work. We need to communicate more (who doesn't?) and we have to constantly work on the relations(hips) with other parties.
Where would you like Jabber to be two years from now? Everywhere.
:-)
I grabbed the latest version, 0+3i, and hacked in a bit of support for simple lists, such as you see on the right hand side of this page. Just as you write blog entries by editing .txt files, where the first line in the file becomes the entry's title, so you create lists by editing .list files, where the first line in the file becomes the heading for the list. You include a list in your template by including the list's name in square brackets, like this: [listname].
Because lists are just files, you can generate them in lots of ways. The Google search list on the right was created using a slightly modified version of Matt Webb's GoogleSearch.pl script. I'll probably cron the script to search Google every hour. See -- powering Blosxom with standard tools like cron. Lovely.
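The mechanism described above is small enough to sketch. Here's a minimal Python rendition (Blosxom itself is Perl, and this is not its actual code; the directory layout, function names, and HTML shape are illustrative assumptions): the first line of each .list file becomes the heading, the remaining lines become items, and [listname] placeholders in the template are swapped for the rendered list.

```python
import re
from pathlib import Path


def render_list(path):
    # First line of a .list file is the heading; the rest are the items.
    heading, *items = path.read_text().splitlines()
    lis = "".join("<li>%s</li>" % i for i in items if i.strip())
    return "<h3>%s</h3><ul>%s</ul>" % (heading, lis)


def expand_lists(template, list_dir):
    # Replace each [listname] placeholder with the rendered listname.list;
    # names with no matching file are left untouched.
    lists = {p.stem: render_list(p) for p in Path(list_dir).glob("*.list")}
    return re.sub(r"\[(\w+)\]",
                  lambda m: lists.get(m.group(1), m.group(0)),
                  template)
```

Leaving unknown placeholders alone means a template can mention a list before its file exists -- handy when a cron job is what generates the file.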
- Do you use your IM client so much that you feel like you live in it? Would you like to use it for tasks other than one-to-one communication -- say, for looking up Web pages?
are in line with what I wrote about in an article about Jabber and bots called 'Is Jabber's Chatbot the Command Line of the Future?'.
I do like the idea of permanent windows opened and connected to bots providing services.