Logipard
1.0.0
This document is arranged as a static single page, but it is also equipped with some reading UI facilities you may find unfamiliar. So, before we start, some words on what you see on the page.
Topics are often nested to a number of levels, and if you scroll around you can notice the headers are sticky, allowing you to keep track of the context structure. Clicking on a header will scroll you back to the start of that topic. Also, arrows to the left of a header allow you to jump back and forth between topics of the same nesting level.
Text fragments outlined like this: ✖ Snapback and other actions are Logipard links to some other topics on this page. Unlike plain HTML links, they don't throw you to the new location on click. Instead, the linked item is unfolded inline, allowing you to preview the target without loss of visual context (try clicking it now to see how it looks).
Probably it isn't too impressive with this particular example, when the item to unfold is within the view already, but it gets more convenient if the target is far away.
The item is unfolded in brief mode, showing only the basic information. You can click "More" to see more. If you have seen enough and want to restore the text disrupted by the unfolding, use Snapback action.
Note that an item only exists on the page in a single instance, so, when unfolded anywhere, it is removed from its previous location. You may see a prompt to unfold it back when you get to its primary placeholder.
  • As an alternative to Snapback, which returns the item to its home location and leaves the view at the same place where you clicked the link, you can use Snapback & Scroll to snap the item back and scroll the view to it, effectively completing the jump you would get from clicking a plain HTML link.
  • If you leave items unfolded at guest locations, you may notice the Reset action enabled on the headers of the topics they belong to - it is basically a quick snapback for all the items moved from under that topic.
  • Finally, you can Elevate the unfolded item, that is, bring up one of its parent topics inline to see more of the context (this may not always be available or practical, depending on the document structure).
  • The #LP? action allows you to obtain the item's name in the documentation model. (This wording will make sense after you get familiar with Logipard concepts.)
In this section we'll explain what Logipard is and how to approach it.

Motivation

The existing annotation-based documentation generators often fall short in key areas:
  1. Limited Scope of Documentation
    Most tools generate documentation focused solely on code objects (e.g., classes, modules, functions). However, they lack support for higher-level documentation, such as:
    • Detached quickstart guides and common use cases.
    • Documenting in terms of domain-specific objects and workflows that may have no simple 1:1 mapping to code objects.
    • Embedded developer notes, task references, custom metadata, or other sorts of non-standard information.
  2. Language-Specific Constraints
    These tools are typically tied to specific programming languages, making it difficult to document:
    • Higher-level or domain-specific objects (e.g., scripts, resources, utility tools).
    • Relationships between objects across different domains (e.g., host language and scripting system).
    Additionally, annotation placement is often restricted, limiting flexibility in organizing and structuring documentation sources.
  3. Rigid Output Formats
    Generated documentation is usually end-user-oriented and sealed, making it difficult to:
    • Link or cross-reference with other documentation artifacts.
    • Use machine-readable formats (e.g., XML, JSON) effectively, even if a tool is capable of generating them, as they often retain the same hardcoded rigid structure as human-readable outputs.

How Logipard Addresses These Issues

Logipard introduces a novel approach to documentation generation, centered around two key concepts:
  1. Freeform Documentation Object Model (FDOM)
    Logipard collects all documentation fragments into a single project-wide database called the Freeform Documentation Object Model (FDOM). FDOM is designed to be:
    • Flexible: It imposes no restrictions on documentation structure or semantics of document items, allowing you to include software objects, user manual chapters, developer notes, or whatever else, or any mix of the above, and organize them into any structure you see appropriate.
    • Language-Agnostic: It supports documentation from multiple sources, regardless of the programming language or domain.
    • Cross-Referencing Enabled: FDOM provides a unified naming space, enabling seamless cross-referencing between different parts of the documentation (e.g., between a field in a table-creation SQL script and its backing objects in business logic code).
  2. Tool-Oriented Documentation Artifact
    FDOM is a machine-readable, tool-oriented artifact rather than an end-user-oriented one. It serves as the primary documentation information source, which can be queried and sliced in various ways. End-user documentation (e.g., API references, user manuals) is generated by generators - sub-tools that read relevant slices from FDOM, organize them according to the target document's structure, and produce the final output.

Key Benefits of Logipard

  • Unified Documentation Data Store: FDOM consolidates all documentation fragments into a single, flexible model.
  • Language and Domain Independence: Document code objects, scripts, workflows, and more, all in one place.
  • Enhanced Cross-Referencing: Easily link related objects across different domains or documentation types.
  • Customizable Output: Generate tailored documentation artifacts for different purposes (e.g., developer notes, end-user documentation).
Now let's get to know the flow...
Install Node.js 13 or higher.
Then, install Logipard CLI either globally:
# from gitverse
npm install -g git+https://gitverse.ru/mikle33/logipard

# or: from github
npm install -g git+https://github.com/sbtrn-devil/logipard

# or: from npm
# TODO
or locally into your current folder:
# from gitverse
npm install git+https://gitverse.ru/mikle33/logipard

# or: from github
npm install git+https://github.com/sbtrn-devil/logipard

# or: from npm
# TODO
After global installation, the Logipard CLI can be run from anywhere:
lp-cli [cmd line options]
After local installation, the Logipard CLI can be run from the current folder:
Linux:
node_modules/.bin/lp-cli [cmd line options]
Windows:
node_modules\.bin\lp-cli [cmd line options]
Run lp-cli without options, or with -h or --help, for a summary of the command line options (there are quite few of them; most of the work is intended to be specified via configuration file(s)).
After installation you can go through ✖ A quickstart example to get a grip on how things are done.
For example, you are going to write an innovative library to make a revolution in the programming world.
You have created a directory for the project to live in (let's refer to it as <PROJECT_ROOT>), possibly even created a package.json in it (let's say it is a Node.js library).
Also you have already installed Logipard CLI as described in ✖ Installation - we'll assume the CLI command is lp-cli [options] (adjust it if you installed locally).
Create the main library codebase:
Create file: <PROJECT_ROOT>/index.js
module.exports.leftpad = function leftpad(str, len, padding = ' ') {
	str = str + '';
	if (str.length >= len) return str;
	padding = (padding = padding + '').length ? padding : ' ';
	while (padding.length + str.length < len) {
		padding = padding + padding;
	}
	return padding.substring(0, len - str.length) + str;
}
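Before annotating the function, it may help to confirm it behaves as intended. Here is a quick self-contained sanity check (the function body is copied from index.js above so the snippet runs on its own):

```javascript
// Copy of the leftpad function from index.js, for a standalone check
function leftpad(str, len, padding = ' ') {
	str = str + '';
	if (str.length >= len) return str;
	padding = (padding = padding + '').length ? padding : ' ';
	// double the padding until it is long enough to cover the gap
	while (padding.length + str.length < len) {
		padding = padding + padding;
	}
	return padding.substring(0, len - str.length) + str;
}

console.log(leftpad('foo', 5));         // '  foo'
console.log(leftpad('foo', 2));         // 'foo' (already long enough)
console.log(leftpad('foo', 10, '+-=')); // '+-=+-=+foo'
```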
Now we can start documenting our stuff.
The primary source for documentation is source code annotations. In our case, we can add some this way:
Edit file: <PROJECT_ROOT>/index.js
//#LP functions/leftpad { <#./%title: leftpad(str, len[, padding])#>
// Pad a string from left to the given minimum length with whitespaces or a given padding string.
//#LP ./str %arg: String to be padded
//#LP ./len %arg: Number, minimum required length of the padded string (<#ref leftpad/str#>)
//#LP ./padding %arg: String, optional. If specified, this string or its portion will be used for the padding instead of spaces.
//
// The string is repeated as required if it is less than the length required for the padding, and its leftmost part of the required length is used.
module.exports.leftpad = function leftpad(str, len, padding = ' ') {
	... // almost everything within can be left as is so far

	//#LP ./%return: The padded string with minimum length of (<#ref leftpad/len#>)
	return padding.substring(0, len - str.length) + str;
}

//#LP }
The annotations look mostly readable and probably almost self-descriptive (not counting some fancy syntax details, which we won't be getting into right now), but there is a bit more to it than meets the eye and that should be mentioned.
First, a technical note: Logipard recognizes as an annotation a contiguous run of single-line comments (// here), possibly with some code at the beginning of a line, where the first comment starts with the #LP token; the run extends until the first line that does not end with a single-line comment. Here, the bare // line after ./padding is required to keep the // The string is ... comment included in the run. On the other hand, the // almost everything ... comment is separated from the run by a line that does not end with a comment, so it is 'just a code comment' and is ignored.
Keep this in mind to ensure that related comments are included, and unrelated comments are not.
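For illustration, here is a small hypothetical fragment (the item name demo/item is made up for this example) showing where an annotation run starts and ends under the rules above:

```javascript
//#LP demo/item: a hypothetical annotation; the run starts at this #LP token...
// ...and this line is still part of it, because every line so far
// ends with a single-line comment.
var a = 1; // even a code line keeps the run going if it ends with a comment
var b = 2;
// The line above does not end with a comment, so the annotation run
// ended there; this comment run does not start with #LP, so Logipard
// treats it as a plain code comment and ignores it.
```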
Second, Logipard is entirely agnostic of the source code and of any language entities. All that matters is the documentation model structure, and this structure is entirely up to you to determine and to follow consistently.
For example, in our documentation we opted for the following items with the following names:
  • functions: an item introduced automatically by the fact that we introduced its member sub-items; we'll consider it a container for the functions list
  • functions/leftpad: the primary documentation item for our function; it contains everything related to it (i.e., the items below)
  • functions/leftpad/%title (comes from ./%title): the item that contains a human-readable title for the main item
  • functions/leftpad/str, functions/leftpad/len, functions/leftpad/padding: items documenting the function's arguments; note that each of them is tagged with the %arg model tag
  • functions/leftpad/%return: the item that documents our function's returned value
  • %arg: the model tag item with this name, introduced by the fact that it was used
Note that any semantic meaning for the items (functions list container, function's main item, items for function's readable title and return value, the fact we tag the arguments specifically with %arg, the fact that the documented entity is specifically a function) is also entirely conventional, as well as their names. From Logipard documentation model perspective, all of these are just generic items, and at this point their interpretation in the way described above only exists in our mind.
Convention in your own use cases may be different, but for purpose of our quickstart let's adhere to this one.
Another point is that the location of the annotations is not actually important - choose it for your convenience. Here we placed it near the documented function declaration, similar to how it is done for conventional doc generators like Javadoc, TSDoc, Doxygen, etc. (And even here, we opted to put the %return fragment near the return statement rather than bundled with the parameters block - again, purely for example; we could as well place it more traditionally.) In general, however, you could place it anywhere in the source file, or in a different file, or even spread different sub-items across different locations. That probably makes little sense for things like function arguments and return value, but these may not be the only items related to this function that you want to document under this node.
Although there are no inherent restrictions on choosing the structure and naming of items in the document model, there are a few points to keep in mind for good style and manageability.
  • the short names of items that are supposed to be auxiliary data fields or to be used as tags should start with %. Fortunately, we have this already.
  • the documentation items specific to your project/module/library should be placed under a single root item with a sufficiently unique name specific to this project (the project domain). This helps you avoid name conflicts if you integrate several projects, and makes it easier to merge their documentation.
Here, the names we picked for our items are not the best choice. We'd better go with something like:
  • domain.our-unique.leftpad.for-nodejs/functions
  • domain.our-unique.leftpad.for-nodejs/functions/leftpad
  • domain.our-unique.leftpad.for-nodejs/functions/leftpad/%title
  • domain.our-unique.leftpad.for-nodejs/functions/leftpad/str
  • domain.our-unique.leftpad.for-nodejs/functions/leftpad/len
  • domain.our-unique.leftpad.for-nodejs/functions/leftpad/padding
  • domain.our-unique.leftpad.for-nodejs/functions/leftpad/%return
  • %arg (this one can be left as is, for reasons we won't get into here)
It is surely impractical to type the domain item prefix every time we need it (even though we won't need it too often). For this purpose, it is better to define an alias. Let's do this:
Edit file: <PROJECT_ROOT>/index.js
//#LP-alias M: domain.our-unique.leftpad.for-nodejs

// now instead of domain.our-unique.leftpad.for-nodejs/functions/... we can use M/functions

//#LP M/functions/leftpad { <#./%title: leftpad(str, len[, padding])#>
...
// ...the remaining part of the file can be left unchanged, since it uses relative names
Furthermore, that //#LP-alias M: domain.our-unique.leftpad.for-nodejs line will likely be a shared common part of all files that contain documentation for our project, and possibly more shared prologue parts will appear eventually. To further reduce duplication, we can move it to a separate file and just include that everywhere we need it...
Create file: <PROJECT_ROOT>/leftpad-module-inc.lp-txt
#LP-alias M: domain.our-unique.leftpad.for-nodejs
(Note the slightly different structure of this file. As Logipard is language-agnostic, even standalone text files consisting purely of Logipard annotations are an option.)
Edit file: <PROJECT_ROOT>/index.js
//#LP-include leftpad-module-inc.lp-txt

//#LP M/functions/leftpad { <#./%title: leftpad(str, len[, padding])#>
...
// ...the remaining part of the file still unchanged
In this quickstart we are building HTML documentation, and the first thing we will need for this is to prepare an HTML template file.
Create file: <PROJECT_ROOT>/leftpad-doc-html.tpl.html
<html>
<head>
<style>
body {
	font-family: sans-serif;
}
code {
	font-family: monospace;
	background: lightgray;
	margin: 0;
	padding: 0;
}
pre {
	font-family: monospace;
	background: lightgray;
	margin: 0;
	padding: 0;
}
table, th, td { border: 1px solid; border-spacing: 0; border-collapse: collapse; }
table { margin: 0.5em }
CSS_TARGET
</style>
</head>
<body>
<div style="display: flex; flex-direction: column; height: 100%; overflow: hidden">

<div style="border-bottom: double 3px">
<center style="font-size: 200%">leftpad for node.js</center>
<center style="font-size: 75%">1.0.0</center>
</div>

<div style="padding: 0; margin: 0; overflow: clip; height: 0; flex-grow: 1">
HTML_TARGET
</div>

</div>
</body>
</html>
This template is purely for example, and includes a bare minimum of items possible. Note the CSS_TARGET and HTML_TARGET placeholders.
Now, to proceed to something more substantial, we need to prepare the Logipard configuration file. We won't explain most of the magic happening here right now; just keep in mind that this is where you reify your conventions on the document model and specify the other technical details.
Create file: <PROJECT_ROOT>/lp-config.json
//#charset utf-8
// (That comment annotation above is taken into consideration, hopefully you will be saving this file in UTF-8)
// This doesn't quite look like a valid JSON (including the comments), but don't bother for now,
// just copy and paste everything as is.
{
	"+ config": {
	},

	"lp-extract": {
		"+ config": {
			// note that non-absolute paths are relative to project root (which is location of this config file)
			"+ excludeInFiles": ["node_modules/**"]
		},
		"items": [
			{
				// section for the primary codebase, in our case it is all the JS files
				"inFiles": ["**/*.js"], 
				"excludeInFiles": [],
				"outDir": "lp-extract.gen",
				"reader": "${LP_HOME}/lpxread-basic" $, // trailing $ is not a typo
				"lpxread-basic": {
					"srcType": "generic-c-like"
				}
			},
			{
				// remember the leftpad-module-inc.lp-txt? it falls under this section
				"inFiles": ["**/*-inc.lp-txt"],
				"excludeInFiles": [],
				"forLPInclude": true,
				"outDir": "lp-extract.gen/lp-includes",
				"reader": "${LP_HOME}/lpxread-basic" $,
				"lpxread-basic": {
					"srcType": "lp-text"
				}
			}
		]
	},

	"lp-compile": {
		"+ config": {
		},
		"items": [
			{
				"inRootDir": "lp-extract.gen",
				"lpIncLookupDirName": "lp-includes",
				"writer": "${LP_HOME}/lpcwrite-basic-json" $,
				"lpcwrite-basic-json": {
					// you may want to customize this for your project name
					"outFile": "lp-compile.gen/leftpad-doc-fdom.json"
				}
			}
		]
	},

	"lp-generate": {
		"+ config": {
		},
		"items": [
			{
				"inFile": "lp-compile.gen/leftpad-doc-fdom.json", // same as outFile in lp-compile section
				"writer": "${LP_HOME}/lpgwrite-example" $,
				"lpgwrite-example": {
					// very much magic here, just paste it with no hesitation
					"program": file("${LP_HOME}/lpgwrite-example-docprg.lpson" $, {
						"docprgPrologue": [ { "nameAlias": "M", "name": "domain.our-unique.leftpad.for-nodejs" } ],
						"docRootItems": {
							"query": [{ "with": "M/functions" }],
							"sort": { "byMember": "%order", "keyFormat": "ds-natural", "order": "asc" }
						},
						"LS_EXTENDS": "Extends (is a)",
						"LS_MEMBERS": "Members",
						"LS_NAME": "Name",
						"LS_DESCRIPTION": "Description",
						"LS_MEMBERS_FROM_EXTENTS": "Members from extents",
						"LS_ARGUMENTS": "Arguments",
						"LS_RETURNS": "Returns:",
						"LS_ERRORS": "Errors:",
						"LS_MEMBERS_DETAILED": "Members (detailed)",
						"LS_MEMBERS_FROM_EXTENTS_DETAILED": "Members from extents (detailed)",
						"LS_ARGUMENTS_DETAILED": "Arguments (detailed)",
						"LS_NOTES": "Notes",
						"LS_PROPERTIES": "Properties",
						"LS_PROPERTIES_FROM_EXTENTS": "Properties from extents",
						"LS_METHODS": "Methods",
						"LS_METHODS_FROM_EXTENTS": "Methods from extents"
					}),
					"renders": [
						{
							"docModel": "DocMain",
							"renderer": "${LP_HOME}/lpgwrite-example-render-html" $,
							"lpgwrite-example-render-html": {
								"outFile": "lp-generate.gen/leftpad-doc.html",
								"emitToc": true,
								"inTemplateFile": "leftpad-doc-html.tpl.html",
								"htmlPlaceholder": "HTML_TARGET",
								"cssPlaceholder": "CSS_TARGET",
								"localizedKeywords": {
									"SNAPBACK": "Snapback",
									"SNAPBACK_AND_SCROLL": "Snapback & Scroll",
									"ELEVATE": "Elevate",
									"RESET": "Reset",
									"ELEVATE_TO": "Elevate to...",
									"COPY_ITEM_NAME": "Copy this item's LP FDOM full name to clipboard:",
									"ITEM_UNFOLDED_ELSEWHERE": "Item unfolded elsewhere on page, click/tap to unfold here...",
									"MORE": "More... >>",
									"TABLE_OF_CONTENTS": "Table of contents"
								}
							}
						}
					]
				}
			}
		]
	}
}
We are all set to generate our quickstart project's documentation page.
Assuming your current work directory is <PROJECT_ROOT> and your current user has write permissions in it, invoke the CLI:
lp-cli lp-config.json
You should see some output, which, in case of success, is like this:
=== Performing stage: EXTRACT ===
EXTRACT: 15.039ms
=== Performing stage: COMPILE ===
COMPILE: 18.592ms
=== Performing stage: GENERATE ===
Start render
lpgwrite-example-render-html: file lp-generate.gen/leftpad-doc.html created
GENERATE: 69.098ms
Several new directories should appear, including lp-generate.gen, which should contain the leftpad-doc.html file. It is our documentation page, ready to view in a browser. Isn't it nice? An extra note: it is a completely self-contained HTML file; you can move it around on its own with no fear of losing dependencies, and it is static and indexing-friendly when hosted on the web.
Also take a look into the lp-compile.gen folder and the leftpad-doc-fdom.json file in it. It is your documentation model DB (in JSON form, in this instance).
Although it looks less impressive and is hardly human-readable, and is in fact an intermediate artifact you can disregard most of the time, it is actually the core item of the Logipard paradigm. This DB is assumed to hold all LP documentation fragments from across the project in the same single space. There can then be multiple final documents, each made up of appropriate slices of this DB, but they will all share it as the data source.
Let's proceed with the quickstart and see how this can work in our case.
Now that we have our library code finalized, we would like to supply something like README file. And, ideally, have it both as standalone README.md and as a part of main HTML page.
Let's start by making a source file...
Create file: <PROJECT_ROOT>/readme.lp-txt
#LP-include leftpad-module-inc.lp-txt
#-LP note the syntax difference when we are using plain text-ish files
# by the way, fragments opened by #-LP are treated as non-annotation comments by Logipard
# and will not make it into the DB or documentation; they span until the next #LP tag.

# So this paragraph is still an LP comment (the hashes are actually optional, but it is better
# to keep the visual style consistent). Also note that consistent indentation (the same amount of the
# same type of whitespace at the start of each interim line within the #LP scope) is handled gracefully.
#LP M/readme { <#./%title: leftpad: quickstart#>

String left pad, revised and improved.
#LP ./install { <#./%title: Install#> <#./%order: 1#>
```
$ npm install git+https://<whereismygit.com>/leftpad
# TODO: actual location
```
#LP }
#LP ./usage { <#./%title: Usage#> <#./%order: 2#>
Use it this way:
```
const { leftpad } = require('./leftpad');

leftpad('foo', 5);
// '  foo'

leftpad('foo', 2);
// 'foo'

leftpad('foo', 10, '+-=');
// '+-=+-=+foo'
```
#LP }
#-LP By the way, it is a good idea to add reference to the usage under the documented function hub...
#LP M/functions/leftpad/usage { <#./%title: Usage#>
See some usage examples under <#ref readme/usage#>.
#LP }
#-LP And also give the functions section some official stuffing...
#LP M/functions: <#./%title: Functions reference#>
Reference on the library functions.
#-LP Be careful though of one caveat when using multi-line #LP...: syntax: its scope is terminated with next non-<#...#>'d #LP tag or #-LP comment
# So lines after this comment are again in M/readme (and the comment itself, in turn, only ends with an #LP tag)
#LP ./versions { <#./%title: Versions summary#> <#./%order: 3#>
#LP ./1.0.0: Initial release version.
#LP ./0.0.9: Pre-release version.

Was not documented with LP, so it pretty sucked.
#LP }
#LP }
Then let's add this...
Edit file: <PROJECT_ROOT>/lp-config.json
...
	"lp-extract": {
	... // under "items"...
		"items": [
			// add third item (to capture the new readme.lp-txt):
			...,
			{
				"inFiles": ["**/*.lp-txt"],
				"excludeInFiles": [],
				"outDir": "lp-extract.gen",
				"reader": "${LP_HOME}/lpxread-basic" $,
				"lpxread-basic": {
					"srcType": "lp-text"
				}
			}
		]
	},
...
	"lp-generate": {
	... // in the first (and so far the only) item under "items"...
		"items": [
			{
				"inFile": "lp-compile.gen/leftpad-doc-fdom.json",
				"writer": "${LP_HOME}/lpgwrite-example" $,
				"lpgwrite-example": {
... // leave most as is, except for....
						"docRootItems": {
							"query": [{ "with": ["M/readme", "M/functions"] }], // <-- change "query" member to this
							],
... // everything here remains as is
						},
... // here as well
			},
			// then, add this second item to "items":
			{
				"inFile": "lp-compile.gen/leftpad-doc-fdom.json", // note, still same as outFile in lp-compile section
				"writer": "${LP_HOME}/lpgwrite-example" $,
				"lpgwrite-example": {
					"program": file("${LP_HOME}/lpgwrite-example-docprg.lpson" $, {
						"docprgPrologue": [ { "nameAlias": "M", "name": "domain.our-unique.leftpad.for-nodejs" } ],
						"docRootItems": {
							"query": [{ "with": ["M/readme"] }],
							],
							"sort": { "byMember": "%order", "keyFormat": "natural", "order": "asc" }
						}
					}),
					"renders": [
						{
							"docModel": "DocMain",
							"renderer": "${LP_HOME}/lpgwrite-example-render-md" $,
							"lpgwrite-example-render-md": {
								"outFile": "lp-generate.gen/leftpad-README.md",
								"emitToc": true,
								"addSourceRef": false,
								"header": "# leftpad #\n\n---",
							}
						}
					]
				}
			}
		// everything else remains as is
		}
...

Run LP pipeline again:
lp-cli lp-config.json
Now check lp-generate.gen again: you should see a new file, leftpad-README.md (view it with an MD viewer you have at hand), and the leftpad-doc.html file has been updated - it now includes the same information as the readme, while still featuring the functions reference, plus some other improvements you might have guessed from the readme source.
This is how basic documentation tasks are done with Logipard, and this is possibly more than sufficient for everyday needs. But there is much more you can do within its framework and with the documentation model, possibly not even limited to simple documenting. To see more capabilities, and to get a better understanding of what we've done in the quickstart, check the main documentation.
Also, as an example of a more complex project documented with Logipard, feel free to explore Logipard code itself. All of the documentation sources are intentionally retained in its package.
Now we can describe ideas and concepts of Logipard in more detail.
First of all, you should get acquainted with ✖ Freeform Documentation Object Model , as it is the model of the documentation you will be working with, and it pervades the Logipard pipeline in every aspect, from annotations in the documented source code to the interfaces of the Logipard customization toolkits.
Speaking of the pipeline, Logipard's documentation generation process is organized like this:
[Diagram: the Logipard documentation generation pipeline]
As it shows, the pipeline assumes user customization at all of the key points. Although Logipard comes with some built-ins to get started, and you don't have to delve into it right away, be aware that the possibilities are much wider.
Get to know more about each pipeline stage: ✖ Logipard Pipeline Stages
All details of the pipeline within the project are specified by Logipard configuration file - get to know about it in specific details: ✖ Logipard configuration file
After you are familiar enough with all these subjects (and the dependent subjects within), the reference on all the user facing items and interfaces can be found here: ✖ Reference
The central idea of Logipard is storage of the extracted documentation fragments in a single intermediate machine-readable database, which is then accessible to documentation generators, as well as for other uses, in a certain uniform manner.
Logipard does not enforce any particular back-end implementation or storage strategy (although a built-in one is provided for the quickstart and as an example) - much like everything else, it is subject to customization. What we define is the high-level data structure, its population logic, and the access methods that must be supported by a database implementation to fit into the intended Logipard pipeline. These definitions constitute the Freeform Documentation Object Model (FDOM).
The FDOM data structure explanation starts best with a visual example...
[Diagram: an example FDOM node structure]
The model data consists of nodes. Each node can be considered a documentation container for a single entity within your domain. It is up to the project convention to agree on what an 'entity' is. It can, for example, be:
  • a program class,
  • a class member,
  • a class method,
  • a chapter, or other named section,
  • a glossary item,
  • ...etc.
Basically everything that makes any sense as a standalone titled text fragment. Another option is to use nodes as extra (meta)data fields / markers / etc. to other nodes. Any domain specific semantics can be implemented on top of node relations provided by FDOM (see: ✖ Parent-member relation and names , ✖ Tagged-tag relation ).
In the example shown above, we have a total of 11 nodes.
Each node has content associated with it - the actual text or whatever other documentation/data pieces. From the FDOM perspective, the content is opaque (although its source is naturally text-based), and it is intentionally not involved in FDOM basic querying comprehension (see ✖ FDOM querying ). The content comprehension (whether it is plain text, markdown text, a resource link, metadata that needs special interpretation, etc.), as well as possibly extending the querying to involve use of the content, is up to the particular FDOM writing and reading users.
The nodes can have a parent-member relation. Each node can have an arbitrary number (including none) of member nodes, and is itself a member of exactly one parent node (except for the root node). No cycles are allowed, and exactly one root node exists in the model, so the model's parent-member skeleton is a single-component tree structure. The parent-member structure is also the basis for the nodes' naming space.
Every node has a short name - a sequence of characters that must be unique among sibling nodes (under the same immediate parent), but doesn't have to be unique globally. (See ✖ Node short names for what sequences are correct short names.) A node can therefore be uniquely identified by a full name - the short names of the nodes that make up the path to it from the root, joined with forward slashes (/-s). Thus, in our example picture, the node short-named subNodeA-3-1 has the full name nodeA/subNodeA-3/subNodeA-3-1, and the node short-named %tag3 has the full name moreTags/%tag3.
Therefore, node's full name can be considered as a directory path under the root node (and so can also be referred to as path name or full path name). The root node itself is unnamed, and is intentionally assumed to have no valid short or full name.
A valid node short name is any non-empty sequence of non-whitespace characters other than /-s, with one extra option: a sequence between single or double quotes is considered a single character and can contain the normally disallowed characters. For example, spare/can is not a valid short name (although it is a valid full name), and space time is not a valid name at all, but "spare/can" and 'space time' are both valid short names. For another example, A/"B/C"/D is a three-segment full name made of the short names A, "B/C", and D. Quotes must match, except when inside the other type of quotes.
Note that quotes are considered parts of names, and there are no escapes or whitespace compactions inside (that is, "shortName" is not the same as shortName and not the same as "shortName ").
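These naming rules can be mimicked with a small parser. The sketch below is not part of Logipard - it is only an illustration of how a full name splits into quote-aware short names under the rules just described:

```javascript
// Split an FDOM-style full name into its short names, treating a quoted
// sequence ("..." or '...') as a single character that may contain slashes
// and whitespace. Illustrative sketch only, not actual Logipard code.
function splitFullName(fullName) {
	const shortNames = [];
	let current = '', quote = null;
	for (const ch of fullName) {
		if (quote) {
			current += ch;
			if (ch === quote) quote = null; // matching close quote found
		} else if (ch === '"' || ch === "'") {
			quote = ch; // open a quoted sequence
			current += ch;
		} else if (ch === '/') {
			shortNames.push(current); // unquoted slash separates segments
			current = '';
		} else {
			current += ch;
		}
	}
	shortNames.push(current);
	return shortNames;
}

console.log(splitFullName('A/"B/C"/D')); // [ 'A', '"B/C"', 'D' ]
```

Note that, per the rules above, the quotes stay part of the resulting short names rather than being stripped.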
Additionally, some short names are considered reserved and should be used with the following caveats in mind:
  • short names starting with # (e.g. #include, ##mp), including the single # itself. These are reserved as private generated names. They can actually appear in the compiled model, but never as explicitly user-specified short names.
  • short names containing :, {, or } (e.g. a::b, {ARG}). The characters :, { and } are reserved as delimiters in the input syntax.
  • short names consisting only of dots (., .., ..., etc.). These are reserved for referring to the current/previous/pre-previous/etc. path levels and cannot be actual node names.
  • the short name ~ (a single tilde): every occurrence of this short name resolves to a unique private generated short name, so it cannot be an actual node name.
You can use single or double quotes to work around these restrictions ("#name", ".", "~", '{ARG}', and 'a::b' are all valid short names).
Some short names can be well-known in your domain's model and carry the meaning of a particular metadata attribute of the parent node. For example, a node named %title can designate the parent node's title. Such short names are recommended to start with the '%' character to emphasize their special role and to simplify filtering them out from "usual" sub-nodes.
Additionally, nodes can be linked by the tagged-tag relation, which has no restrictions on direction or on the nodes allowed to link. (We will also use the words model tags as a synonym, in contexts that require distinction from text markup tags.) In programming-like terms, this relation can be thought of as "weak" links, in contrast to the "strong" links that form the parent-membership and namespacing structure. In the further description of the model, the fact that a node is tagged with some tags can be denoted by prefixing each tag with a hash: node #tag1 #tag2 ...
The primary use case of tagging is to mark a node with a set of flags of your domain, to let the FDOM users know that the documented object is (or is not) of some particular domain-specific type - e. g., a function, a structure, a query, etc. For every such flag, you introduce a specific tag (e. g. %function, %arg, etc.), each of which is also an FDOM node. Apart from reducing the number of basic entities in the model, this approach allows the tag itself to be tagged and to have metadata sub-nodes, which enables creation of quite complex comprehensions.
Another use case is to make metadata attributes that hold a list of other nodes. You can create a node with a well-known short name tagged with all the nodes that are to be included into the list. For example, in node describing a class that extends some base classes/interfaces, a sub-node named %extends can be tagged with all of the nodes describing the base classes.
Non-existent nodes are considered equivalent to null nodes - nodes that have no non-blank content, do not tag any other node, have no tags, and have no non-null member nodes. They are typically optimized out of actual storage.
Every piece of content, and every assigned tag, is associated with the source it comes from (e. g. a source code file), and only exists in the model by virtue of its presence in that source. The only legitimate method of creating and updating the FDOM is to re-extract and re-compile the source, replacing any data associated with this source with the up-to-date version.
As a result of an update, a node can become a null node - this is a perfectly normal situation.
The FDOM structure is designed in such a way that the order of processing each individual source has no query-relevant impact. No matter whether you construct the FDOM from scratch with a certain set of sources, or from a part of this set and then update the rest incrementally, possibly repeating as the sources change, you should end up with the same model (up to content/node list ordering, which is not query-relevant).
FDOM includes the concept of query - a specification that defines a read-only view of the stored data and outlines the interface through which the data should be accessed. The specific API form and underlying implementation of the query interface are left to individual implementations and are considered out of scope.
In essence, an FDOM query is a pipeline that takes a collection of nodes as input and produces another collection of nodes based on specific criteria. This pipeline can consist of either basic queries or a composite chain of sub-queries (query fragments). Queries can be executed all at once or applied incrementally, with intermediate states stored in the query context.
The core element of the FDOM data view is the collection - a set of distinct nodes that contains no duplicates and no null nodes. This set can be empty, and it is inherently unordered. A collection can be explicitly specified by the user, or be obtained as the result of a query.
Having a collection assumes that the user can access each individual node in it, at least by means of enumeration. Access to a node assumes the ability to get all of its elements:
  • content (in implementation-specific way, so exact details of content access are opaque to FDOM query concept),
  • the set of member nodes (as FDOM collection),
  • the set of member node short names, and each individual member by its short name,
  • the set of tag nodes (as FDOM collection).
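A minimal in-memory sketch of this access model (the class and method names below are assumptions for illustration, not the actual Logipard reader API, which is implementation-specific):

```javascript
// A minimal in-memory sketch of node access (assumed shapes and method
// names; the actual model reader API is implementation-specific).
class Node {
	constructor(content = null) {
		this.content = content;    // content access is opaque to the query concept
		this._members = new Map(); // short name -> member Node
		this._tags = new Set();    // tag Nodes
	}
	member(shortName) { return this._members.get(shortName) || null; }
	memberNames() { return [...this._members.keys()]; }
	members() { return [...this._members.values()]; }
	tags() { return [...this._tags]; }
	addMember(shortName, node) { this._members.set(shortName, node); return node; }
	addTag(tagNode) { this._tags.add(tagNode); }
}

const root = new Node();
const nodeA = root.addMember("nodeA", new Node("some content"));
nodeA.addMember("%title", new Node("Node A title"));
console.log(nodeA.memberNames()); // → [ '%title' ]
console.log(nodeA.member("missing")); // → null (a null node)
```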
While it is recommended for API implementations to retain the original order of items from the same source during collection enumeration wherever possible, this behavior should not be relied upon. Generally, the order of items only becomes relevant at the final step, once all required collections have been obtained and the query scope has concluded. For this reason, the FDOM query concept intentionally omits any notion of sorting, delegating that responsibility to other layers of the API.
The query context represents the state of the query pipeline after each query fragment (or, at the start of the query, the initial state). It is associated with the current collection, which is the result of the last executed query fragment (or, in the initial state, the input collection provided by the user).
To simplify implementation optimizations, the FDOM query concept suggests that the current collection is only exposed to the user after context teardown - an explicit indication by the user that the query is complete and access to the final result is required. The query result is the current collection from its context at time of teardown.
There are a number of basic queries that can comprise a more complex query. FDOM does not suggest a particular syntax for the query DSL, so we'll express it in a somewhat conventional notation - the way the items are actually specified on site depends on the particular API and implementation. A complex query is basically a list of basic queries, where each one takes the output of the previous one as its input; the first one acts on the initial current collection of the context, and the last one sets the new current collection. Our conventional notation for such query lists will be <query1> / <query2> / ... / <queryN>.
Some query-related points to remember and keep in mind:
  • collections (including query results) contain no null nodes and no duplicate nodes
  • collections are unordered; the order of their elements on enumeration is implementation-dependent. Although it is recommended for the implementation to keep the enumeration order of explicitly specified collections (see ✖ Collection specification ) and the declaration order of elements that come from the same source, this is best-effort advice, not a requirement, and is not always possible to satisfy. Precise ordering is a concern out of FDOM scope.
Not a query per se, but there are cases when you need to specify a collection in an ad hoc manner (an initial one that a query starts on, or an auxiliary collection in addition to the current one). FDOM allows for the following ways to specify a collection:
  • direct list of nodes, generally by their full names: [node-fn-1, node-fn-2, ...] As a collection may be empty but may not contain null nodes, any null nodes possibly referred to in this list are effectively dropped from the specified collection (they won't be enumerated or considered in any way, and won't count towards the size of the collection).
  • mixed list of nodes and collection specs, e. g.: [node-fn-1, node-fn-2, <coll-spec-1>, node-fn-3...] An extended version of the list of nodes: the collection spec elements are treated as the list of nodes obtained by expanding the corresponding collection; duplicate nodes are ignored (a node is only included in the collection once).
  • collections union: <coll-spec-1> + <coll-spec-2> + ... The collection that includes every node from every given collection.
  • collections intersection: <coll-spec-1> & <coll-spec-2> & ... The collection that only includes nodes that are found in all of the given collections.
  • collections subtraction: <coll-spec-1> - <coll-spec-2> - ... The collection that only includes nodes that are found in the first collection of the list, but not in any of the subsequent collections.
  • reference by alias: SomeCollAlias An alias is a "bookmark" for a collection within the query context, making it possible to recall and refer to the collection later. An alias is conventionally denoted by a valid FDOM short name.
  • API-specific representation: whatever way API enables the user to represent a complete collection object, whether the collection is directly user-specified, or is obtained via query, or via any other way.
The specification may be composite, e. g. BasicNodes + [advanced/%node1, advanced/%node3, MoreAdvancedNodes] + ((EvenMoreNodesA & EvenMoreNodesB) - [%node4]).
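Illustratively, the collection algebra above maps directly onto plain set operations. A hedged JavaScript sketch with hypothetical helper names (not Logipard API):

```javascript
// Illustrative only: the collection algebra above expressed as plain set
// operations (helper names are hypothetical, not Logipard API).
const union = (...colls) => colls.reduce((a, c) => new Set([...a, ...c]), new Set());
const intersect = (first, ...rest) => new Set([...first].filter(n => rest.every(c => c.has(n))));
const subtract = (first, ...rest) => new Set([...first].filter(n => !rest.some(c => c.has(n))));

const A = new Set(["node1", "node2", "node3"]);
const B = new Set(["node2", "node4"]);
console.log([...union(A, B)]);     // → [ 'node1', 'node2', 'node3', 'node4' ]
console.log([...intersect(A, B)]); // → [ 'node2' ]
console.log([...subtract(A, B)]);  // → [ 'node1', 'node3' ]
```

Using Set also reflects the "no duplicates" rule of FDOM collections; the enumeration order shown here is just JavaScript's insertion order, which FDOM does not guarantee.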
Note on aliases: they can be valid per-query or can be permanent (shared between multiple independent queries). In any case, an alias is assumed unique within its scope, and is not re-assigned to a different collection once set (behaviour in case of replacement of an alias is implementation-dependent). Name crossing between permanent and per-query aliases is also discouraged.
Some queries need to specify a condition to check on a node potentially included into the result. FDOM allows for the following conditions:
  • Boolean constant: true if the condition always holds, false if the condition always fails.
  • isAnyOf <collection-spec>: the condition holds if the node is a member of the given collection.
  • hasMembersThat <sub-condition>: the condition holds if the sub-condition holds for at least one of the node's non-null members.
  • hasMembersNamed <regular expression>: the condition holds if the node has at least one non-null member with shortname that matches the given regular expression. Shortcut for hasMembersThat (named <regular expression>) (see below) that can have potentially optimized implementation.
  • hasAnyOfTags <collection-spec>: the condition holds if the node has at least one tag from the given collection of tag nodes.
  • hasAllOfTags <collection-spec>: the condition holds if the node has all tags from the given collection of tag nodes.
  • hasParentThat <sub-condition>: the condition holds if the sub-condition holds for the node's immediate parent.
  • named <regular expression>: the condition holds if the node's shortname matches the given regular expression.
  • and <list of sub-conditions>: the condition holds if all of the sub-conditions in the given list hold.
  • or <list of sub-conditions>: the condition holds if at least one of the sub-conditions in the given list holds.
  • not <sub-condition>: the condition holds if the sub-condition fails, and fails if the sub-condition holds.
  • reference by alias SomeConditionAlias: a condition can be marked by a shortcut alias (conventionally denoted by a valid FDOM short name) that can be re-used across the query or other queries in this context.
Note there are no conditions that operate on content. FDOM queries are purely structural.
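To make the structural nature of conditions concrete, here is a sketch of how they could be evaluated over a simple in-memory node shape. The holds function and the node shape are assumptions for illustration, not Logipard's implementation:

```javascript
// Sketch of structural condition evaluation over a simple in-memory node
// shape { shortName, members, tags }; holds() is a hypothetical helper,
// not the Logipard implementation.
function holds(cond, node) {
	switch (cond.type) {
		case "true": return true;
		case "false": return false;
		case "named": return cond.regexp.test(node.shortName);
		case "hasMembersNamed": return node.members.some(m => cond.regexp.test(m.shortName));
		case "hasMembersThat": return node.members.some(m => holds(cond.sub, m));
		case "and": return cond.subs.every(c => holds(c, node));
		case "or": return cond.subs.some(c => holds(c, node));
		case "not": return !holds(cond.sub, node);
		default: throw new Error("unknown condition: " + cond.type);
	}
}

const node = {
	shortName: "node",
	members: [{ shortName: "%data", members: [], tags: [] }],
	tags: []
};
console.log(holds({ type: "hasMembersNamed", regexp: /^%data$/ }, node)); // → true
console.log(holds({ type: "not", sub: { type: "named", regexp: /^node$/ } }, node)); // → false
```

Note that nothing in the evaluator touches node content - only names, members, and sub-conditions, in line with the purely structural design.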
alias ShortName - set a local alias for the current collection (it only remains in effect until the current query context teardown). Does not change the current collection, just sets an alias to it that can be used in later parts of the query.
All alias short names, both local and permanent, are assumed unique per context. Re-use of an alias name is discouraged (a local alias name can only be re-used in a subsequent query), and the behaviour in this case is implementation-specific.
with <collection-spec> - replace the current collection with an explicitly specified one (see ✖ Collection specification ). Useful for sub-queries that are based on some well-known sets of nodes.
membersThat <condition-spec> [on <collection-spec>] [recursive] - take the member nodes of each element of the current collection that satisfy the given condition ( ✖ Condition specification ), and replace the current collection with the set of such nodes. An explicitly specified collection can optionally be searched instead of the current one. This query can be recursive ( ✖ Recursive queries ).
For example, we have node and its members:
  • node
  • node/memberA
  • node/memberB
  • node/memberC
The query starts with collection:
  • node
Then query membersThat (not (named /^.*A/)) will yield the following collection:
  • node/memberB
  • node/memberC
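In plain JavaScript terms, membersThat amounts to a flatMap over the collection followed by a condition filter, with deduplication. A sketch with assumed node shapes (not the Logipard API):

```javascript
// Assumed node shapes for illustration (not the Logipard API): membersThat
// is a flatMap over the collection plus a condition filter, deduplicated.
const node = {
	members: [
		{ shortName: "memberA" },
		{ shortName: "memberB" },
		{ shortName: "memberC" }
	]
};
const membersThat = (collection, cond) =>
	[...new Set(collection.flatMap(n => n.members).filter(cond))];

// not (named /^.*A/) from the example above, as a plain predicate
const result = membersThat([node], m => !/^.*A/.test(m.shortName));
console.log(result.map(m => m.shortName)); // → [ 'memberB', 'memberC' ]
```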
itemsThat <condition-spec> [on <collection-spec>] [recursive] - filter the elements of the current collection that satisfy the given condition ( ✖ Condition specification ), and replace the current collection with the set of these nodes. An explicitly specified collection can optionally be searched instead of the current one. This query can be recursive ( ✖ Recursive queries ).
For example, the query starts on a collection of nodes:
  • node1/memberA
  • node2/memberB
  • node3/memberC
Then query itemsThat (not (named /^.*A/)) will yield the following collection:
  • node2/memberB
  • node3/memberC
tagsThat <condition-spec> [on <collection-spec>] [recursive] - take the tag nodes of each element of the current collection that satisfy the given condition ( ✖ Condition specification ), and replace the current collection with the set of these nodes. An explicitly specified collection can optionally be searched instead of the current one. This query can be recursive ( ✖ Recursive queries ). Although this query can be used on any collection, its typical use is on single elements (single-element collections).
inMembersThat <condition-spec> [on <collection-spec>] [recursive] query <basic-query-list> - take the member nodes of each element of the current collection that satisfy the given condition ( ✖ Condition specification ), then replace the current collection with the result of the given query list applied to the collection of these (member) nodes. An explicitly specified collection of initial nodes can optionally be used to start the sub-query on instead of the current one. This query can be recursive ( ✖ Recursive queries ).
For example, we have node, its members, and its sub-members:
  • node
  • node/memberA
  • node/memberB
  • node/memberB/%data
  • node/memberC
  • node/other/%data
The query starts with collection:
  • node
Then the query inMembersThat (named /^member/) query membersThat (named /^%data$/) delivers the following collection:
  • node/memberB/%data
(The node/other has member %data, but does not pass the inMembersThat condition, so doesn't get into the collection of subjects for sub-query search.)
Sub-queries use the same local collection alias namespace as the main query.
inTagsThat <condition-spec> [on <collection-spec>] [recursive] query <basic-query-list> - take the tag nodes of each element of the current collection that satisfy the given condition ( ✖ Condition specification ), then replace the current collection with the result of the given query list applied to the collection of these (tag) nodes. An explicitly specified collection of initial nodes can optionally be used to start the sub-query on instead of the current one. This query can be recursive ( ✖ Recursive queries ).
For example, we have the following tag nodes and their members:
  • %tag1
  • %tag2
  • %tag2/%isLangObject
  • %tag2/%subInfo
  • %tag3
  • %tag3/%subInfo
and the following set of nodes tagged as follows:
  • node1 #%tag1
  • node2 #%tag1 #%tag2
  • node3 #%tag3 #%tag2
  • node4 #%tag4
The query is inTagsThat (hasMembersNamed /^%isLangObject$/) query membersThat (named /^%subInfo$/).
Done on either of the following collections:
  • node1
  • node2
  • node3
it delivers the collection:
  • %tag2/%subInfo
Done on collection:
  • node1
  • node4
it yields empty collection.
Sub-queries use the same local collection alias namespace as the main query.
inItemsThat <condition-spec> [on <collection-spec>] [recursive] query <basic-query-list> - take each element of the current collection that satisfies the given condition ( ✖ Condition specification ), then replace the current collection with the result of the given query list applied to the collection of these (collection entry) nodes. An explicitly specified collection of initial nodes can optionally be used to start the sub-query on instead of the current one. This query can be recursive ( ✖ Recursive queries ).
For example, we have the following nodes and their members:
  • node1
  • node1/member1
  • node1/member2
  • node1/member2/%flagged
  • node2
  • node2/member2
  • node2/member2/%flagged
  • node3/member1
  • node3/member1/%flagged
The query is inItemsThat (hasMembersNamed /^member1$/) query membersThat (hasMembersNamed /^%flagged$/). Done on the collection:
  • node1
  • node2
  • node3
it yields the collection:
  • node1/member2
  • node3/member1
Sub-queries use the same local collection alias namespace as the main query.
subtractQuery [on <collection-spec>] <basic-query-list> - perform the given query list and subtract the result from the current collection. The collection the sub-query is performed on is, by default, subtractQuery's initial collection itself, but an explicitly specified collection can optionally be given instead.
For example, we have the following nodes and their members:
  • node1
  • node2
  • node2/%flag
  • node3
The query is subtractQuery itemsThat (hasMembersNamed /^%flag$/). Done on the collection:
  • node1
  • node2
  • node3
it yields the collection:
  • node1
  • node3
Sub-queries use the same local collection alias namespace as the main query.
unionQuery [on <collection-spec>] <basic-query-list> - perform the given query list and union the result with the current collection. The collection the sub-query is performed on is, by default, unionQuery's initial collection itself, but an explicitly specified collection can optionally be given instead.
For example, we have the following nodes and their members:
  • node1
  • node2
  • node2/member
  • node3
The query is unionQuery membersThat (true). Done on the collection:
  • node1
  • node2
  • node3
it yields the collection:
  • node1
  • node2
  • node2/member
  • node3
Sub-queries use the same local collection alias namespace as the main query.
intersectQuery [on <collection-spec>] <basic-query-list> - perform the given query list and intersect the result with the current collection. The collection the sub-query is performed on is, by default, intersectQuery's initial collection itself, but an explicitly specified collection can optionally be given instead.
For example, we have the following nodes and their members:
  • node1
  • node2
  • node2/%flag
  • node3
The query is intersectQuery itemsThat (hasMembersNamed /^%flag$/). Done on the collection:
  • node1
  • node2
  • node3
it yields the collection:
  • node2
Sub-queries use the same local collection alias namespace as the main query.
sideQuery [on <collection-spec>] <basic-query-list> - perform the given query list, but leave the current collection unchanged. This query only makes sense if the query list ends in setting a local alias. The collection the sub-query is performed on is, by default, sideQuery's initial collection itself, but an explicitly specified collection can optionally be given instead.
The intended use case for this query is to set an alias in an inline way without breaking the flow of the "primary" query. For example, we have a node tagged:
  • tagsList #tag1 #tag2
and are given the collection of nodes tagged as:
  • node1 #tag1
  • node2 #tag2
  • node3 #tag3
Then the query "filter nodes tagged by some of the tags from tagsList" will be: (sideQuery on [tagsList] tagsThat (true) / alias TAGS) / itemsThat (hasAnyOfTags TAGS).
Applied to collection:
  • node1
  • node2
  • node3
it will yield the collection:
  • node1
  • node2
Sub-queries use the same local collection alias namespace as the main query.
Some queries can optionally be recursive. That means: after replacing the current collection with the matching set of nodes, the same query is applied to the resulting collection, and the outcome is added to the result (as per the union operation); then the same is done to the newly added nodes, and so on, until the resulting collection no longer changes. This is useful for queries that pull nodes via some transitive relation.
For example, let us have nodes and their members tagged as follows:
  • classA
  • classB
  • classB/%extends #classA
  • classC
  • classD
  • classD/%extends #classB #classC
Given starting collection of some single 'class' node, e. g.:
  • classD
we could use the following query to fetch the collection of nodes for the classes it extends: inMembersThat (named /^%extends$/) query tagsThat (true), but this query will only return nodes for the "directly" extended classes:
  • classB
  • classC
In order to pull the whole tree of extended classes in depth, we need the recursive version of the query: inMembersThat (named /^%extends$/) recursive query tagsThat (true). Then we will get the expected:
  • classA (recursively queried via classB)
  • classB
  • classC
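The recursive expansion is essentially a transitive-closure fixpoint. A sketch of these semantics in JavaScript (recursiveQuery and the step callback are hypothetical names, not Logipard API; per the description above, the start nodes themselves are not included unless re-discovered):

```javascript
// The recursive option as a transitive-closure fixpoint (hypothetical
// names, not Logipard API): apply `step` to the frontier, union the
// results, and repeat until the collection stops growing.
function recursiveQuery(startNodes, step) {
	const result = new Set();
	let frontier = startNodes.flatMap(step); // first pass: the non-recursive result
	while (frontier.length > 0) {
		const next = [];
		for (const n of frontier) {
			if (!result.has(n)) {
				result.add(n);
				next.push(...step(n)); // recurse into newly found nodes
			}
		}
		frontier = next;
	}
	return result;
}

// The %extends relation from the example: classD -> classB, classC; classB -> classA
const extendsMap = { classD: ["classB", "classC"], classB: ["classA"] };
const step = n => extendsMap[n] || [];
console.log([...recursiveQuery(["classD"], step)].sort()); // → [ 'classA', 'classB', 'classC' ]
```

Tracking already-seen nodes in the result set is what guarantees termination even if the relation contains cycles.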
Logipard documentation generation occurs in three stages:
  • extraction of annotations into uniform input files understood by the FDOM compiler
  • compilation from input files into actual FDOM
  • generation of whatever documentation artifacts are needed based on the compiled FDOM
When you invoke the Logipard CLI, it runs all of the stages in the order listed above. Each stage relies on artifacts generated by the previous one but, as these artifacts are generally retained, you can (re-)run only some of the stages by explicitly specifying the ones you need.
Run extraction stage only:
lp-cli --extract <config-file>
Run compilation stage only:
lp-cli --compile <config-file>
Run generation stage only:
lp-cli --generate <config-file>
Run compilation and generation stage:
lp-cli --compile --generate <config-file>
...etc.
At this stage, the documentation annotations are extracted from the source files (or other types of sources) and put into Logipard input files of a certain uniform format. This job is done by plugins named extraction readers.
Although the expected format of this stage's output (and of the input for the compilation stage) is fixed, the annotations format itself, as well as its decoding, is up to the extraction readers. It is possible, for example, to make a reader for javadoc or doxygen comments, for machine-readable output of some compiler, or even for the code itself. It is also an option to pass the same sources through several extractors, letting each one extract its own recognized part of the input.
Although it is technically possible to extract inputs for several independent FDOMs, in general it is expected that all extractors of a project will prepare input for the same single FDOM, and that they will follow some consistent common convention when organizing the extracted input files. For handling independent FDOMs, a better practice is to have separate project configuration files.
Extraction stage is controlled by ✖ lp-extract entry of the configuration file.
Extraction reader must be implemented in compliance with the corresponding interface: ✖ Extraction reader interface .
Logipard comes with some built-in extraction readers: ✖ Built-in extraction readers
Some ready-to-use extraction readers come with the Logipard package, both as quickstart boilerplate and as examples.
A reader that extracts LP input from single-line comments or plain text files, with minimal additional processing. It supports single-line comments in generic C-like (//), shell-like (#), and Lua/SQL-like (--) languages, as well as the plaintext "language". It also serves as an example implementation of an extraction reader. Usage of this reader for an extract-stage item is enabled by reader: "${LP_HOME}/lpxread-basic" $ in ( ✖ reader ).
A contiguous run of single-line comments starting with #LP or -#LP is assumed to be LP input belonging to a single #LP tag (the -#LP runs are ignored), e. g.:
code code // non-LP comment
code //#LP: an LP tag
code code code // continued LP tag
code //#LP: another LP tag
code code code // continued another LP tag
code
code // again a non-LP comment (line with no comment breaks contiguity)
code code
code code code // once again a non-LP comment
code //#LP: third LP tag <#LP: fully written inline tags, including digressions, are allowed#> as well
code code code //-#LP commented out LP tag
code //#LP: 4th LP tag
results in the following extracted input:
<#LP: an LP tag
 continued LP tag#>
<#LP: another LP tag
 continued another LP tag#>
<#LP: third LP tag <#LP: fully written inline tags, including digressions, are allowed#> as well#>
<#LP: 4th LP tag#>
Additionally, a charset specification is allowed by using a comment like //#charset utf-8 (only once per file, and the "charset" keyword must be in lowercase). What is considered a single-line comment depends on the source type specified for this extraction work item (see ✖ lpxread-basic specific config (lp-extract job item) ).
For the plain text "language", every line is treated as a single-line comment. As a side effect of this convention, you may need to insert an #-LP (or #LP-) comment line to mark the termination of an LP tag started by #LP tag: .... Additionally, in the quite specific case when you have a code fragment that itself contains LP markup, you should place it between #LP~delimiter~ delimiter lines to avoid production of incorrect output. E. g.:
#LP~x~
```
#LP ./example: this is an example of a code that contains a verbatim LP markup <#~~ and an escaped verbatim run ~~#>
```
#LP~x~
Everything within the #LP~...~ lines will be transferred to the extracted input exactly verbatim, although the whole fragment still has to contain correct LP markup. So only use this way of delimitation for code fragments, and with caution.
A member named lpxread-basic with the lpxread-basic specific configuration should be added to the extraction job item that uses lpxread-basic, including the sub-members as described...
Members
  • ✖ srcType - The source type of the inFiles...
For example:
{
	"inFiles": ["**/*.js"],
	"excludeInFiles": [],
	"outDir": "...",
	"reader": "logipard/lpxread-basic",
	"lpxread-basic": {
		"srcType": "generic-c-like"
	}
}
Members (detailed)
The source type of the inFiles...
Can be either of:
  • generic-c-like: C-like languages allowing single-line comments starting from // (C family, Java family, JS, PHP, etc.)
  • generic-sh-like: languages allowing single-line comments starting from # (sh family, python, perl, PHP, etc.)
  • lua-sql: languages allowing single-line comments starting from -- (Lua & SQL are most known ones)
  • lp-text: plaintext-based file, where every line is considered a single-line comment
At this stage, the extracted input is compiled into an actual FDOM representation. Logipard does not enforce a specific FDOM representation and storage mechanism. Instead, it feeds the model creation and update commands to plugins named compiled model writers, and it is up to them how to represent and actually store the model - in a local file of a certain format, in a DB backend, or using some online service. Complementary to them, there is also the notion of compiled model readers, which are assumed to read the FDOM from the storage created/updated by the matching compiled model writers and to expose it for querying by a user (typically by a generator at the generation stage).
Compilation stage is controlled by ✖ lp-compile entry of the configuration file.
Compiled model writer must be implemented in compliance with the corresponding interface: ✖ Compilation writer interface .
Compiled model reader implementation is up to the customizer, although it is recommended to align with FDOM querying concept: ✖ FDOM querying .
Logipard comes with some built-in compiled model writers: ✖ Built-in compiled model writers and corresponding model readers: ✖ Built-in compiled model readers
Some ready-to-use compiled model writers come with the Logipard package, both as quickstart boilerplate and as examples.
A writer of the model into a JSON file with the schema given below. The writer keeps the whole intermediate model in memory and writes it back in one whole lump, so it may not be suitable for really massive amounts of documentation and/or very frequent updates.
See description of the produced JSON schema, writer configuration, and usage details: ✖ ${LP_HOME}/lpcwrite-basic-json
The compiled model readers corresponding to the built-in compiled model writers.
This reader is able to read FDOM from JSON file compiled by ✖ ${LP_HOME}/lpcwrite-basic-json: Writer of FDOM into JSON file . It is internally used by ✖  ${LP_HOME}/lpgwrite-example: An example generator of single-page HTML/MD documentation , but is also available for standalone use by your own generators (or for whatever other purposes). It follows the recommended model reader interface outline: ✖ Suggested compiled FDOM reader interface .
See description of interface and usage: ✖ logipard/lpgread-basic-json.js
At this stage, the final artifacts are expected to be produced from the compiled and stored FDOM. This job is delegated to plugins named generation writers (or, in a more general scope, simply generators). What a generator can do is limited only by the user's resourcefulness. Most typically we would expect production of some end-user documentation item, but it can as well be an update to a wiki or an issue tracker, some code or config generation for a deliverable, or an intermediate artifact for the next stage of some greater process spanning beyond Logipard.
Since the job items of this stage are ordered, it is a totally reasonable option for a generation writer to prepare input for another generation writer that runs at some later point within the remaining part of the stage.
Since a generator's work is based on the FDOM, it almost always relies on compiled model readers to fetch the data. A generation writer typically supports a certain set of compiled model readers, and has to be made aware of the corresponding FDOM storage settings via a certain part of the configuration.
Generation stage is controlled by ✖ lp-generate entry of the configuration file.
Generation writer must be implemented in compliance with the corresponding interface: ✖ Generation writer interface .
Logipard comes with some built-in generation writers: ✖ Built-in generation writers
Some ready-to-use generation writers come with the Logipard package, both as quickstart boilerplate and as examples.
This generation writer produces a human-readable documentation page, extracted and structured according to a document program. See the description of the writer configuration and in-depth usage details: ✖ ${LP_HOME}/lpgwrite-example . Usage of this generator for a generate-stage item is enabled by writer: "${LP_HOME}/lpgwrite-example" $ in ✖ writer .
This generator assists in the language translation of JSON-backed FDOM files compiled by ✖ ${LP_HOME}/lpcwrite-basic-json: Writer of FDOM into JSON file . See the description of the writer configuration and in-depth usage details: ✖ ${LP_HOME}/lpgwrite-i18n-assist . Usage of this generator for a generate-stage item is enabled by writer: "${LP_HOME}/lpgwrite-i18n-assist" $ in ✖ writer .
Project configuration is passed to Logipard in a JSON formatted (more exactly, ✖ LPSON formatted) configuration file of the following structure:
{
	// optional (note there is exactly single space after '+')
	"+ config": {
		...
	},
	// mandatory, configuration for extract stage
	"lp-extract": {
		// optional
		"+ config": {
			...
		},
		// mandatory
		"items": [
			{
				"SKIP": bool, // optional, non-false value comments out this item
				"inRootDir": string-path, // optional, defaults to project root dir (see below)
				"inFiles": [ ...strings-file-globs ], // mandatory, must be relative to the dir specified in inRootDir
				"excludeInFiles": [ ...strings-file-globs ], // optional, must be relative to the dir specified in inRootDir
				"forLPInclude": bool, // optional, defaults to false
				"outDir": string-path, // mandatory
				"reader": string-path, // mandatory
				... // any other parameters are not recognized by Logipard itself, but the reader can require its own specific ones
			},
			... // zero, one or more items
		]
	},
	// mandatory, configuration for compile stage
	"lp-compile": {
		// optional
		"+ config": {
			...
		},
		// mandatory
		"items": [
			{
				"SKIP": bool, // optional, non-false value comments out this item
				"inRootDir": string-path, // mandatory, 
				"outFile": string-path,
				"writer": string-path,
			},
			... // zero, one or more items
		]
	},
	// mandatory, configuration for generate stage
	"lp-generate": {
		// optional
		"+ config": {
		},
		// mandatory
		"items": [
			{
				"SKIP": bool, // optional, non-false value comments out this item
				"inFile": string-path,
				"writer": string-path,
				...
			},
			... // zero, one or more items
		]
	}
}
The config file also serves as an anchor for the project root directory: it is the directory the config file is placed in. All relative file and directory paths are assumed to be relative to the project root, unless stated otherwise.
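For orientation, here is a minimal concrete sketch of such a config (all file and directory names here are hypothetical; the compile and generate writer paths follow the built-ins mentioned in this document, the extraction reader path stands for whatever reader your sources need, and real items usually carry extra reader/writer-specific members):

```json
{
	"lp-extract": {
		"items": [
			{
				"inFiles": [ "src/**/*.js" ],
				"outDir": "lp-extract.gen",
				"reader": "./readers/my-reader.js"
			}
		]
	},
	"lp-compile": {
		"items": [
			{
				"inRootDir": "lp-extract.gen",
				"outFile": "lp-compile.gen/model.json",
				"writer": "${LP_HOME}/lpcwrite-basic-json"
			}
		]
	},
	"lp-generate": {
		"items": [
			{
				"inFile": "lp-compile.gen/model.json",
				"writer": "${LP_HOME}/lpgwrite-example"
			}
		]
	}
}
```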
The given set of members is the bare minimum recognized by Logipard itself. However, there can be additional ones, and user tools can make use of them. The configuration unit passed to a tool is based on the object from a single entry of "items", but it is also merged with the "+ config" object from the corresponding tool's stage, and additionally with the "+ config" from the root level (in the order: global "+ config" -> stage "+ config" -> item). The merging is a shallow per-member append at the object's root level. In case of a member name collision, the later object's member replaces the earlier object's member. However, there is a way to override this behaviour for an array or object type member: if the member's expected name is "id", then add a member named "+ id" to the "+ config"(s); the resulting "id" member will then contain the sub-members from "+ id" of the "+ config" appended before the ones given by the item's "id".
For example:
{
	"+ config": {
		...
		"a": [1, 2],
		"+ b": [3, 4],
		"c": [5, 6],
		"+ d": [7, 8],
		...
	},
	"lp-...": {
		"+ config": {
			...
			"a": [9, 10],
			"b": [11, 12],
			"+ c": [13, 14],
			"+ d": [15, 16],
			...
		},
		"items": [
			...
			{
				// "a", "b", "c", "d" are unspecified, in the actual config item they will be:
				// "a": [9, 10]
				// "b": [3, 4, 11, 12]
				// "c": [13, 14]
				// "d": [7, 8, 15, 16]
			},
			{
				...
				"a": ["A", "B"], // actual "a": ["A", "B"]
				"b": ["A", "B"], // actual "b": ["A", "B"]
				"c": ["A", "B"], // actual "c": [13, 14, "A", "B"]
				"d": ["A", "B"], // actual "d": [7, 8, 15, 16, "A", "B"]
				...
			}
		]
	}
}
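The merge rules above can be sketched in plain JavaScript (a minimal sketch of the array-member case shown in the example; the actual Logipard internals may differ in details):

```javascript
// Minimal sketch of the "+ name" config merging described above
// (array-valued members only, as in the example above).
function mergeConfigLayers(...layers) {
	const prefix = {}; // accumulated "+ name" contributions, keyed by plain name
	const values = {}; // effective plain "name" values
	for (const layer of layers) {
		for (const [key, val] of Object.entries(layer)) {
			if (key.startsWith("+ ")) {
				const name = key.slice("+ ".length);
				// "+ name" accumulates a prefix and overrides an earlier plain "name"
				prefix[name] = (prefix[name] || []).concat(val);
				delete values[name];
			} else {
				// a plain member consumes the accumulated prefix and replaces
				// any earlier value for that name
				values[key] = (prefix[key] || []).concat(val);
				delete prefix[key];
			}
		}
	}
	// names that only ever got "+ name" contributions resolve to the prefix itself
	for (const [name, val] of Object.entries(prefix)) {
		if (!(name in values)) values[name] = val;
	}
	return values;
}
module.exports = { mergeConfigLayers };
```

Running this over the layers from the example reproduces the commented results for both items.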
Members
Name
Description
✖ "+ config"
Configuration parameters shared by all the job items in all the stages. Appended to each item-specific configuration before the item's own configuration and the stage-specific "+ config".
✖ lp-extract
Configuration parameters for the job items in the extract stage.
✖ lp-compile
Configuration parameters for the job items in the compile stage.
✖ lp-generate
Configuration parameters for the job items in the generate stage.
Members (detailed)
Configuration parameters shared by all the job items in all the stages. Appended to each item-specific configuration before the item's own configuration and the stage-specific "+ config".
Configuration parameters for the job items in the extract stage.
Members
Name
Description
✖ "+ config"
Configuration items shared by all the job items in the extract stage. Appended to each item-specific configuration before the item's own configuration, and after the global ✖ "+ config" .
✖ items[]
Array of configurations specifying each job item in the extract stage.
Members (detailed)
Configuration items shared by all the job items in the extract stage. Appended to each item-specific configuration before the item's own configuration, and after the global ✖ "+ config" .
Array of configurations specifying each job item in the extract stage.
Members
Name
Description
✖ SKIP
Bool. Non-false value tells Logipard to skip this item. Use to comment out temporarily disabled items. Optional, defaults to false.
✖ inRootDir
Root directory for input files lookup, defaults to project root dir.
✖ inFiles
A string, or array of strings, with glob filename templates - specifies the set of input files that fall under this item. The paths in templates are relative to the ✖ inRootDir .
✖ excludeInFiles
A string, or array of strings, with glob filename templates - specifies the set of files to exclude from ✖ inFiles . The paths in templates are relative to the ✖ inRootDir . Optional.
✖ outDir
A string, path to the directory where the extracted documentation model input will be placed. The extraction output directory is assumed transient and should be added to the VCS ignore list. Note that the same source file can be picked up by multiple extraction job items; if its extracted input from different jobs ends up under the same outDir, later jobs will overwrite the output of earlier ones, so plan ahead and take care that the output locations do not conflict.
✖ forLPInclude
Boolean, if true then input extractions by this job item will be saved as module files eligible for inclusion via LP-inc/LP-include (see ✖ Including module files ). Optional, defaults to false.
✖ reader
String. Path to the extraction reader's JS file, relative to project root (unless absolute). The extraction reader is expected to comply with ✖ Extraction reader interface . Logipard contains some built-in extraction readers: ✖ Built-in extraction readers
Members (detailed)
Bool. Non-false value tells Logipard to skip this item. Use to comment out temporarily disabled items. Optional, defaults to false.
Root directory for input files lookup, defaults to project root dir.
A string, or array of strings, with glob filename templates - specifies the set of input files that fall under this item. The paths in templates are relative to the ✖ inRootDir .
A string, or array of strings, with glob filename templates - specifies the set of files to exclude from ✖ inFiles . The paths in templates are relative to the ✖ inRootDir . Optional.
A string, path to the directory where the extracted documentation model input will be placed. The extraction output directory is assumed transient and should be added to the VCS ignore list. Note that the same source file can be picked up by multiple extraction job items; if its extracted input from different jobs ends up under the same outDir, later jobs will overwrite the output of earlier ones, so plan ahead and take care that the output locations do not conflict.
Boolean, if true then input extractions by this job item will be saved as module files eligible for inclusion via LP-inc/LP-include (see ✖ Including module files ). Optional, defaults to false.
Typically the root is the same for all items of the same project, but different item groups can go under different subdirectories, e. g. you may want to specify <root>/src for extractions from source code files and <root>/txt for extractions from text files.
String. Path to the extraction reader's JS file, relative to project root (unless absolute). The extraction reader is expected to comply with ✖ Extraction reader interface . Logipard contains some built-in extraction readers: ✖ Built-in extraction readers
Configuration parameters for the job items in the compile stage.
Members
Name
Description
✖ "+ config"
Configuration items shared by all the job items in the compile stage. Appended to each item-specific configuration before the item's own configuration, and after the global ✖ "+ config" .
✖ items[]
Array of configurations specifying each job item in the compile stage.
Members (detailed)
Configuration items shared by all the job items in the compile stage. Appended to each item-specific configuration before the item's own configuration, and after the global ✖ "+ config" .
Array of configurations specifying each job item in the compile stage.
Members
Name
Description
✖ SKIP
Bool. Non-false value tells Logipard to skip this item. Use to comment out temporarily disabled items. Optional, defaults to false.
✖ inRootDir
Path to the root directory of the model input extracted at the extract stage. In most cases, this is the same as the extract job's ✖ outDir .
✖ lpIncLookupDirName
Name for directory to use for cascading LP-inc/LP-include lookup.
✖ writer
String. Path to the compilation model writer's JS. The compilation writer is expected to comply with ✖ Compilation writer interface . Logipard contains some built-in compilation writers: ✖ Built-in compiled model writers
Members (detailed)
Bool. Non-false value tells Logipard to skip this item. Use to comment out temporarily disabled items. Optional, defaults to false.
Path to the root directory of the model input extracted at the extract stage. In most cases, this is the same as the extract job's ✖ outDir .
Name for directory to use for cascading LP-inc/LP-include lookup.
When using <#LP-include filename#> in an LP input file, where filename is non-absolute and not explicitly local (i. e. does not start with . or ..), the lookup is done as ./<value-of-lpIncLookupDirName>/filename; if not found there, then as ../<value-of-lpIncLookupDirName>/filename, and so on upwards (but not higher than inRootDir).
String. Path to the compilation model writer's JS. The compilation writer is expected to comply with ✖ Compilation writer interface . Logipard contains some built-in compilation writers: ✖ Built-in compiled model writers
Configuration parameters for the job items in the generate stage.
Members
Name
Description
✖ "+ config"
Configuration items shared by all the job items in the generate stage. Appended to each item specific configuration before the item's own configuration, and after the global ✖ "+ config" .
✖ items[]
Array of configurations specifying each job item in the generate stage.
Members (detailed)
Configuration items shared by all the job items in the generate stage. Appended to each item specific configuration before the item's own configuration, and after the global ✖ "+ config" .
Array of configurations specifying each job item in the generate stage.
Members
Name
Description
✖ SKIP
Bool. Non-false value tells Logipard to skip this item. Use to comment out temporarily disabled items. Optional, defaults to false.
✖ writer
String. Path to the generation writer's (generator's) JS. The generation writer is expected to comply with ✖ Generation writer interface . Logipard contains some built-in generation writers: ✖ Built-in generation writers
Members (detailed)
Bool. Non-false value tells Logipard to skip this item. Use to comment out temporarily disabled items. Optional, defaults to false.
String. Path to the generation writer's (generator's) JS. The generation writer is expected to comply with ✖ Generation writer interface . Logipard contains some built-in generation writers: ✖ Built-in generation writers
The reference for the items and interfaces that a Logipard user may need to deal with at one point or another.
API interfaces and extra configuration items for built-in plugins.
The extraction reader is invoked when processing an extract job item. Its purpose is to parse the source file data and to return the extracted annotations in LP input format ( ✖ Logipard Input File Format ). It is specified by the reader field of an extract job item ( ✖ reader ). The extraction reader must be implemented as a CommonJS module that exposes the following interface...
Members
Name
Description
✖ async .parseInput({ buffer, itemConfig, filePath })
Parse the file content, supplied as a Node.JS Buffer, and return the extraction result in LP input format, as a single joined string.
The interface must be exposed by reader module via module.exports similar to:
exports.parseInput = async function parseInput({ buffer, itemConfig, filePath }) { ... }
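For illustration, a minimal reader might look like this (not a built-in reader; the /*-LP ... -*/ marker convention is this example's own invention):

```javascript
// Minimal extraction reader sketch: decode the file as UTF-8 and pass
// through the bodies of /*-LP ... -*/ comments as LP input.
async function parseInput({ buffer, itemConfig, filePath }) {
	const text = buffer.toString('utf8');
	const chunks = [];
	const re = /\/\*-LP([\s\S]*?)-\*\//g; // hypothetical marker pair
	let m;
	while ((m = re.exec(text)) !== null) chunks.push(m[1]);
	return chunks.join('\n');
}
exports.parseInput = parseInput;
```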
Members (detailed)
Parse the file content, supplied as a Node.JS Buffer, and return the extraction result in LP input format, as a single joined string.
Arguments
Name
Description
✖ buffer
Buffer, the input file supplied in plain binary form. Dealing with encoding is up to the reader.
✖ itemConfig
dictionary (as Object), the piece of configuration object related to this job's item ( ✖ items[] ). The reader can read all members of the item config object, but it is a good style to keep reader-specific configuration under a member sub-object named after the reader.
✖ filePath
string, the path to the file (project root agnostic, ready for standalone use in fs or path). Can be used for reference if information from the file alone is not sufficient for the reader's purposes.
Returns:
string, expected to contain the extraction in LP input format ( ✖ Logipard Input File Format ).
Errors:
The parseInput can throw an error to indicate an extraction failure.
Arguments (detailed)
Buffer, the input file supplied in plain binary form. Dealing with encoding is up to the reader.
dictionary (as Object), the piece of configuration object related to this job's item ( ✖ items[] ). The reader can read all members of the item config object, but it is a good style to keep reader-specific configuration under a member sub-object named after the reader.
string, the path to the file (project root agnostic, ready for standalone use in fs or path). Can be used for reference if information from the file alone is not sufficient for the reader's purposes.
Methods
async .parseInput({ buffer, itemConfig, filePath })
The auxiliary API exposed to the compile stage writer callback ✖  async .processCustomTag({ modelOutput, targetNodeName, tagName, toolkit, sourceFile }) . It helps with handling LP-specific constructs in case the custom tag content is not a terminal text node and is assumed to contain nested LP markup.
Members
Name
Description
✖ .lpNameRegexp(bounds,flags)
Get a RegExp for matching a string that fits as an LP item name. On success, the [0] of the match is the name string.
✖ .parseName(nameString)
Parse LP name string into array of name fragments.
✖ .currentScopeNodeName
Get full unaliased name of the currently scoped node (as array of string name fragments). Read-only.
✖ .resolveParsedName(parsedName)
Get node full FDOM name of a node by a parsed name array (obtained via ✖ .parseName(nameString) ). Useful when a custom tag is assumed to contain a FDOM name, and the processor needs to resolve it by the same rules as <#ref name#> in this scope.
✖ .items[]
The array of items contained in the tag, each element is either string (content) or a non-string object (nested tag, which should be processed via ✖ async .processTag(tagItem) ).
✖ async .processTag(tagItem)
Process the (nested) tag item, as LP would if encountered this tag normally inline.
✖ .text
The tag content as a single string, assuming no tags are embedded in the content; otherwise null.
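As an illustration, a writer's processCustomTag for a hypothetical see-also custom tag, whose content is expected to be an FDOM item name, could use the toolkit like this (the seeAlso storage member on the model handle is this example's own assumption):

```javascript
// Sketch of a compile-stage writer callback using the toolkit: text items
// are resolved as FDOM names by the same rules as <#ref name#>, nested tags
// are delegated back to LP's normal processing.
async function processCustomTag({ modelOutput, targetNodeName, tagName, toolkit, sourceFile }) {
	if (tagName !== 'see-also') return; // hypothetical custom tag
	for (const item of toolkit.items) {
		if (typeof item !== 'string') {
			await toolkit.processTag(item); // nested tag
			continue;
		}
		const fullName = toolkit.resolveParsedName(toolkit.parseName(item.trim()));
		// store the reference in a writer-specific way (assumed member)
		if (!modelOutput.seeAlso) modelOutput.seeAlso = [];
		modelOutput.seeAlso.push({ from: targetNodeName, to: fullName });
	}
}
exports.processCustomTag = processCustomTag;
```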
Members (detailed)
Get a RegExp for matching a string that fits as an LP item name. On success, the [0] of the match is the name string.
Arguments
Name
Description
✖ bounds
String, optional, default '', can also be '^', '$', '^$'. Specifies the limit assertions to include into the regexp. If it contains ^, the ^ is added to start of the regexp. If it contains $, the $ is added to end of the regexp.
✖ flags
String, optional, default 'g'. Set of regexp flags to add.
Returns:
The LP name matching RegExp object
To parse the name into further details, use ✖ .parseName(nameString) .
Arguments (detailed)
String, optional, default '', can also be '^', '$', '^$'. Specifies the limit assertions to include into the regexp. If it contains ^, the ^ is added to start of the regexp. If it contains $, the $ is added to end of the regexp.
String, optional, default 'g'. Set of regexp flags to add.
Parse LP name string into array of name fragments.
Arguments
Name
Description
✖ nameString
The source name, string
Returns:
Array of the parsed name fragments
Arguments (detailed)
The source name, string
Get full unaliased name of the currently scoped node (as array of string name fragments). Read-only.
Get node full FDOM name of a node by a parsed name array (obtained via ✖ .parseName(nameString) ). Useful when a custom tag is assumed to contain a FDOM name, and the processor needs to resolve it by the same rules as <#ref name#> in this scope.
Arguments
Name
Description
✖ parsedName
Array, the parsed name as returned by ✖ .parseName(nameString) .
Returns:
String, the full FDOM name of the node.
Arguments (detailed)
Array, the parsed name as returned by ✖ .parseName(nameString) .
The array of items contained in the tag, each element is either string (content) or a non-string object (nested tag, which should be processed via ✖ async .processTag(tagItem) ).
The tag object is assumed opaque and only usable to pass to processTag, since the writer does not have much context to do anything reasonable with it anyway.
Process the (nested) tag item, as LP would if encountered this tag normally inline.
Arguments
Name
Description
✖ tagItem
The tag item object, as obtained from the ✖ .items[] array.
Note that sub-processing the tag is assumed opaque to the user of the toolkit (that is, the outer tag's currently running ✖  async .processCustomTag({ modelOutput, targetNodeName, tagName, toolkit, sourceFile }) ), and the user's toolkit object should not be used by, or exposed to, any callback from inside the sub-processing.
Arguments (detailed)
The tag item object, as obtained from the ✖ .items[] array.
The tag content as a single string, assuming no tags are embedded in the content; otherwise null.
Methods
.lpNameRegexp(bounds,flags)
.parseName(nameString)
.resolveParsedName(parsedName)
async .processTag(tagItem)
The compilation writer is invoked when processing a compile job item. Its purpose is to accept FDOM construction/update commands and, based on them, to construct/update the corresponding compiled FDOM representation. It is specified by the writer field of a compile job item ( ✖ writer ). The compilation writer must be implemented as a CommonJS module that exposes the following interface...
Members
Name
Description
✖  async .processCustomTag({ modelOutput, targetNodeName, tagName, toolkit, sourceFile })
Process a custom inline tag within the specified target node's content. Interpretation of the tag is up to the writer: it may be appending of some model representation specific type of content, or some adjustments to the content output process, etc.
✖ async .openModelOutput({ itemConfig, workDir })
Initialize the compiled model storage, or open the existing one for update.
✖ async .closeModelOutput({ modelOutput })
Finalize the model output and invalidate the handle. A writer will be opened and closed exactly once per compile job item; this span can be called a model update session.
✖ async .invalidateSourceFile({ modelOutput, sourceFile, newDependencies })
Invalidate the given source file. All content and tag-ons added from this source file ( ✖ async .appendContent({ modelOutput, targetNodeName, content, sourceFile }) , ✖ async .tagTo({ modelOutput, tagNodeName, targetNodeName, sourceFile }) ) should be removed from the storage or archived, as they are going to be replaced by a newer version of the input. Note that the source file here, as well as in other methods, means the LP input source file created at the extraction stage ( ✖ Extraction stage ), not the user-facing annotation source file(s); it will therefore have the .lpinput extension and will be located at the path determined by the corresponding extraction job's ✖ outDir .
✖ async .appendContent({ modelOutput, targetNodeName, content, sourceFile })
Append content to the specified target node. Only text content is added this way, for other content components there are other methods.
✖ async .tagTo({ modelOutput, tagNodeName, targetNodeName, sourceFile })
Tag a target node with a specific tag node. Can also be worded as "tag (apply) the specific tag node on a given target node". That is, tagNodeName node will be added to list of targetNodeName node's tags.
✖  async .appendRef({ modelOutput, targetNodeName, refNodeName, refText, sourceFile })
Append an inline reference to the specified target node's content.
The interface must be exposed by writer module via module.exports similar to:
exports.openModelOutput = async function openModelOutput({ itemConfig, workDir }) { ... }
exports.closeModelOutput = async function closeModelOutput({ modelOutput }) { ... }
exports.invalidateSourceFile = async function invalidateSourceFile({ modelOutput, sourceFile, newDependencies }) { ... }
exports.appendContent = async function appendContent({ modelOutput, targetNodeName, content, sourceFile }) { ... }
exports.tagTo = async function tagTo({ modelOutput, tagNodeName, targetNodeName, sourceFile }) { ... }
exports.appendRef = async function appendRef({ modelOutput, targetNodeName, refNodeName, refText, sourceFile }) { ... }
exports.processCustomTag = async function processCustomTag({ modelOutput, targetNodeName, tagName, toolkit, sourceFile }) { ... }
Members (detailed)
Process a custom inline tag within the specified target node's content. Interpretation of the tag is up to the writer: it may be appending of some model representation specific type of content, or some adjustments to the content output process, etc.
Arguments
Name
Description
✖ modelOutput
Model output handle (as returned by ✖ async .openModelOutput({ itemConfig, workDir }) ).
✖ targetNodeName
String, the full FDOM name of the target node where the custom tag was encountered.
✖ tagName
String, the name of the custom tag.
✖ toolkit
Object, a set of utility functions provided for custom tag processing. See ( ✖ Compile stage writer toolkit for custom tag processor ).
✖ sourceFile
String, the path to the source file where this custom tag originates.
Returns:
none
Errors:
The processCustomTag can throw an error.
Arguments (detailed)
Model output handle (as returned by ✖ async .openModelOutput({ itemConfig, workDir }) ).
String, the full FDOM name of the target node where the custom tag was encountered.
String, the name of the custom tag.
Object, a set of utility functions provided for custom tag processing. See ( ✖ Compile stage writer toolkit for custom tag processor ).
String, the path to the source file where this custom tag originates.
Initialize the compiled model storage, or open the existing one for update.
Arguments
Name
Description
✖ itemConfig
dictionary (as Object), the piece of configuration object related to this job's item ( ✖ items[] ). The writer can read all members of the item config object, but it is a good style to keep writer-specific configuration under a member sub-object named after the writer.
✖ workDir
string, the path to project root directory, ready for standalone use in fs or path. It is useful if the writer's configuration must contain any file/directory paths that should be project root relative.
Returns:
model output handle, an opaque object that will be used as the model handle and passed back to other writer methods.
Errors:
The openModelOutput can throw an error.
Arguments (detailed)
dictionary (as Object), the piece of configuration object related to this job's item ( ✖ items[] ). The writer can read all members of the item config object, but it is a good style to keep writer-specific configuration under a member sub-object named after the writer.
string, the path to project root directory, ready for standalone use in fs or path. It is useful if the writer's configuration must contain any file/directory paths that should be project root relative.
Finalize the model output and invalidate the handle. A writer will be opened and closed exactly once per compile job item; this span can be called a model update session.
Arguments
Name
Description
✖ modelOutput
Model output handle (as returned by ✖ async .openModelOutput({ itemConfig, workDir }) ). Assumed no longer valid after this call.
Returns:
none
Errors:
The closeModelOutput can throw an error.
Arguments (detailed)
Model output handle (as returned by ✖ async .openModelOutput({ itemConfig, workDir }) ). Assumed no longer valid after this call.
Invalidate the given source file. All content and tag-ons added from this source file ( ✖ async .appendContent({ modelOutput, targetNodeName, content, sourceFile }) , ✖ async .tagTo({ modelOutput, tagNodeName, targetNodeName, sourceFile }) ) should be removed from the storage or archived, as they are going to be replaced by a newer version of the input. Note that the source file here, as well as in other methods, means the LP input source file created at the extraction stage ( ✖ Extraction stage ), not the user-facing annotation source file(s); it will therefore have the .lpinput extension and will be located at the path determined by the corresponding extraction job's ✖ outDir .
Arguments
Name
Description
✖ modelOutput
Model output handle (as returned by ✖ async .openModelOutput({ itemConfig, workDir }) ).
✖ sourceFile
string, the path to the input file to invalidate, relative to the compile job's ✖ inRootDir .
Returns:
none
Errors:
The invalidateSourceFile can throw an error.
Note that a tag can be applied to a node by commands from multiple sources, so tags must only be removed after invalidation of all sources their application originates from, and only if they have not been re-added (see ✖ async .tagTo({ modelOutput, tagNodeName, targetNodeName, sourceFile }) ).
Arguments (detailed)
Model output handle (as returned by ✖ async .openModelOutput({ itemConfig, workDir }) ).
string, the path to the input file to invalidate, relative to the compile job's ✖ inRootDir .
Append content to the specified target node. Only text content is added this way, for other content components there are other methods.
Arguments
Name
Description
✖ modelOutput
Model output handle (as returned by ✖ async .openModelOutput({ itemConfig, workDir }) ).
✖ targetNodeName
string, the full FDOM name of the target node where content will be appended.
✖ content
string, the content to append to the target node.
✖ sourceFile
string, the path to the input file from which this content originates, relative to the compile job's ✖ inRootDir .
Returns:
none
Errors:
The appendContent can throw an error.
Arguments (detailed)
Model output handle (as returned by ✖ async .openModelOutput({ itemConfig, workDir }) ).
string, the full FDOM name of the target node where content will be appended.
string, the content to append to the target node.
string, the path to the input file from which this content originates, relative to the compile job's ✖ inRootDir .
Tag a target node with a specific tag node. Can also be worded as "tag (apply) the specific tag node on a given target node". That is, tagNodeName node will be added to list of targetNodeName node's tags.
Arguments
Name
Description
✖ modelOutput
Model output handle (as returned by ✖ async .openModelOutput({ itemConfig, workDir }) ).
✖ tagNodeName
string, the full FDOM name of the tag node to apply.
✖ targetNodeName
string, the full FDOM name of the target node on which the tagNodeName will be applied.
✖ sourceFile
string, the path to the input file from which this tagging originates, relative to the compile job's ✖ inRootDir . Storing the tagging origin matters for subsequent invalidation ( ✖ async .invalidateSourceFile({ modelOutput, sourceFile, newDependencies }) ): a tag stays in effect as long as at least one non-invalidated source applying it remains. (Note that tagTo can be called for the same tagNodeName and targetNodeName multiple times with different sourceFile-s.)
Returns:
none
Errors:
The tagTo can throw an error.
Arguments (detailed)
Model output handle (as returned by ✖ async .openModelOutput({ itemConfig, workDir }) ).
string, the full FDOM name of the tag node to apply.
string, the full FDOM name of the target node on which the tagNodeName will be applied.
string, the path to the input file from which this tagging originates, relative to the compile job's ✖ inRootDir . Storing the tagging origin matters for subsequent invalidation ( ✖ async .invalidateSourceFile({ modelOutput, sourceFile, newDependencies }) ): a tag stays in effect as long as at least one non-invalidated source applying it remains. (Note that tagTo can be called for the same tagNodeName and targetNodeName multiple times with different sourceFile-s.)
Per compile job, and per model update session (see ✖ async .closeModelOutput({ modelOutput }) ), ✖ async .invalidateSourceFile({ modelOutput, sourceFile, newDependencies }) is guaranteed to be called before any tagTo-s, and exactly once for every sourceFile for which any tagTo-s (and other content adding methods) are invoked. That means that if a tag survives all the source invalidations, each subsequent tagTo invocation re-validates the tag application from the corresponding sourceFile.
Append an inline reference to the specified target node's content.
Arguments
Name
Description
✖ modelOutput
Model output handle (as returned by ✖ async .openModelOutput({ itemConfig, workDir }) ).
✖ targetNodeName
String, the full FDOM name of the target node where reference will be appended.
✖ refNodeName
String, the full FDOM name of the referenced node.
✖ refText
String, the alt text of the reference. Can be empty (and should be stored as such in the model, as generators can take it as a hint to use an appropriate default display text).
✖ sourceFile
String, the path to the input file from which this reference originates, relative to the compile job's ✖ inRootDir .
Returns:
none
Errors:
The appendRef can throw an error.
Arguments (detailed)
Model output handle (as returned by ✖ async .openModelOutput({ itemConfig, workDir }) ).
String, the full FDOM name of the target node where reference will be appended.
String, the full FDOM name of the referenced node.
String, the alt text of the reference. Can be empty (and should be stored as such in the model, as generators can take it as a hint to use an appropriate default display text).
String, the path to the input file from which this reference originates, relative to the compile job's ✖ inRootDir .
Methods
async .processCustomTag({ modelOutput, targetNodeName, tagName, toolkit, sourceFile })
async .openModelOutput({ itemConfig, workDir })
async .closeModelOutput({ modelOutput })
async .invalidateSourceFile({ modelOutput, sourceFile, newDependencies })
async .appendContent({ modelOutput, targetNodeName, content, sourceFile })
async .tagTo({ modelOutput, tagNodeName, targetNodeName, sourceFile })
async .appendRef({ modelOutput, targetNodeName, refNodeName, refText, sourceFile })
The generation writer is invoked when processing a generate job item. Its purpose is to read a supported compiled representation of the FDOM (usually using a compiled model reader) and to generate the documentation or other output it is responsible for. It is specified by the writer field of a generate job item ( ✖ writer ). The generation writer must be implemented as a CommonJS module that exposes the following interface...
Members
Name
Description
✖ async .perform({ workDir, itemConfig, errors })
Perform the generation process.
The interface must be exposed by writer module via module.exports similar to:
exports.perform = async function perform({ workDir, itemConfig, errors }) { ... }
Members (detailed)
Perform the generation process.
Arguments
Name
Description
✖ workDir
String, the path to project root directory, ready for standalone use in fs or path.
✖ itemConfig
Dictionary (as Object), the piece of configuration object related to this job's item ( ✖ items[] ). The writer can read all members of the item config object, but it is a good style to keep writer-specific configuration under a member sub-object named after the generator.
✖ errors
Array, a collection of errors (as JS Error objects) encountered during processing that should be appended to.
Returns:
none
Errors:
The perform can throw an error, but it is recommended to do so only to mark the overall failure in the end, accumulating the intermediate errors in errors if possible.
Arguments (detailed)
String, the path to project root directory, ready for standalone use in fs or path.
Dictionary (as Object), the piece of the configuration object related to this job's item ( ✖ items[] ). The writer can read all members of the item config object, but it is good style to keep writer-specific configuration under a member sub-object named after the generator.
Array, a collection of errors (as JS Error objects) encountered during processing that should be appended to.
Methods
async .perform({ workDir, itemConfig, errors })
The renderer is invoked by ✖  ${LP_HOME}/lpgwrite-example: An example generator of single-page HTML/MD documentation generator. Its purpose is to produce the document of the format it supports (HTML, MD, etc.) according to document structure data supplied by lpgwrite-example. It is specified by the renders[]/renderer field of an lpgwrite-example generate job item ( ✖ renderer ). A renderer must be implemented as a CommonJS module that exposes the following interface...
Members
Name
Description
✖ async .render({ workDir, rendererConfig, input, errors })
Render the output using the specified renderer configuration and input provided by the caller ( ✖  ${LP_HOME}/lpgwrite-example: An example generator of single-page HTML/MD documentation ).
The interface must be exposed by renderer module via module.exports similar to:
exports.render = async function render({ workDir, rendererConfig, input, errors }) { ... }
Members (detailed)
Render the output using the specified renderer configuration and input provided by the caller ( ✖  ${LP_HOME}/lpgwrite-example: An example generator of single-page HTML/MD documentation ).
Arguments
Name
Description
✖ workDir
String, the path to project root directory, ready for standalone use in fs or path.
✖ rendererConfig
Dictionary (as Object), the piece of the configuration object related to this renderer item ( ✖ renders[] ). The renderer can read all members of the item config object, but it is good style to keep renderer-specific configuration under a member sub-object named after the renderer.
✖ input
The input data to be rendered. Object of this format: ✖ Input format for lpgwrite-example renderer .
✖ errors
Array, a collection of errors (as JS Error objects) encountered during rendering that should be appended to.
Returns:
none
Errors:
The render can throw an error.
Arguments (detailed)
String, the path to project root directory, ready for standalone use in fs or path.
Dictionary (as Object), the piece of the configuration object related to this renderer item ( ✖ renders[] ). The renderer can read all members of the item config object, but it is good style to keep renderer-specific configuration under a member sub-object named after the renderer.
The input data to be rendered. Object of this format: ✖ Input format for lpgwrite-example renderer .
Array, a collection of errors (as JS Error objects) encountered during rendering that should be appended to.
Methods
async .render({ workDir, rendererConfig, input, errors })
Contains the document data to render. Dictionary (as Object) with the following members...
Members
Name
Description
✖ .toc[]
Array of items for table of contents. Each item is a dictionary (as Object) with the following members...
✖ .itemsByUid[uid]
Dictionary by UID (string). The same items as ✖ .items[] expanded flat, but keyed by UID ( ✖ .uid ).
✖ .items[]
Array of items to display, ordered in the suggested display order when on a single page. Each array element is a dictionary (as Object) with the following members...
Members (detailed)
Array of items for table of contents. Each item is a dictionary (as Object) with the following members...
Members
Name
Description
✖ .title
String, the item's human readable title.
✖ .uid
String, the item's UID (key in ✖ .itemsByUid[uid] ).
✖ .subEntries[]
Array (non-null, possibly empty) of the nested items of this TOC item. Each element has the same structure as a root element of .toc[], including the next-level .subEntries[] (and so on).
The items are ordered in suggested display order.
Members (detailed)
String, the item's human readable title.
String, the item's UID (key in ✖ .itemsByUid[uid] ).
Array (non-null, possibly empty) of the nested items of this TOC item. Each element has the same structure as a root element of .toc[], including the next-level .subEntries[] (and so on).
The items are ordered in suggested display order.
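Since .subEntries[] is non-null at every level, the TOC tree can be traversed with a plain recursion and no null checks; a sketch of flattening it to indented lines:

```javascript
// Sketch: traversing the TOC tree via .subEntries[], which is non-null
// (at least empty) at every level
function tocToLines(tocEntries, indent) {
	indent = indent || "";
	const lines = [];
	for (const entry of tocEntries) {
		// each entry carries .title, .uid and .subEntries[]
		lines.push(indent + entry.title);
		lines.push(...tocToLines(entry.subEntries, indent + "  "));
	}
	return lines;
}
```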
Dictionary by UID (string). The same items as ✖ .items[] expanded flat, but keyed by UID ( ✖ .uid ).
Array of items to display, ordered in the suggested display order when on a single page. Each array element is a dictionary (as Object) with the following members...
Members
Name
Description
✖ .uid
String, the item's UID, can be used to access this item via ✖ .itemsByUid[uid] .
✖ .modelBasic[]
The basic part of the item's model, visible in brief display mode and always shown. An array (non-null, possibly empty) of elements in the display order; each element can contain some of the following members...
✖ .modelMore[]
The additional part of the item's model, displayed in full display mode in addition to the basic one. An array (non-null, possibly empty) that can contain the same elements as ✖ .modelBasic[] .
Members (detailed)
String, the item's UID, can be used to access this item via ✖ .itemsByUid[uid] .
The basic part of the item's model, visible in brief display mode and always shown. An array (non-null, possibly empty) of elements in the display order; each element can contain some of the following members...
Members
Name
Description
✖ .itemTitle
String. If the member is present, it marks this element as an item title, and contains the human-readable text of the item title.
✖ .uid
String, defined only if ✖ .itemTitle is present. UID of the target (or titled) item, the same as ✖ .uid .
✖ .item
String. If the member is present, it marks this element as a placeholder for emitting a nested item, and contains the UID of that item, the same as ✖ .uid . Note that the same item (with the same UID) can occur multiple times in the document, and one of these occurrences will be suggested as the home (primary) location for the item - check ✖ .isHomeLocation if this matters for the rendered document format.
✖ .isHomeLocation
Boolean, defined only if ✖ .item is present. If true, this location is suggested as the item's home location. There is only one home location for each item.
✖ .printType
String, defined only if ✖ .item is present. Defines the suggested display mode for item emitted into this placeholder. Can be either of:
  • "brief": only brief part of the item data should be displayed
  • "full": the full item data should be displayed
✖ .text
String. If the member is present, it is a Markdown text fragment. Some HTML-like tags, case-sensitive, should be interpreted as LP inline references (the text properties are HTML-encoded):
✖ .openSection
String. If the member is present, it marks this element as the opener of a titled section, and contains the section ID to be matched by a later ✖ .closeSection .
✖ .closeSection
String. If the member is present, it marks this element as the closure of a titled section, and contains the section ID to close, matching an earlier ✖ .openSection .
✖ .title
String, defined only if ✖ .openSection is present. Title of the opened section.
✖ .table
If the member is present, it marks a table block. Object with the following member properties...
✖ .list[][]
Array of arrays of strings. If the member is present, it marks this element as a (flat, unnumbered) list. Each element of the array is a list item; each sub-element is a Markdown text (same as in ✖ .text ), and the sub-elements are assumed to be appended in array order to form the line.
Members (detailed)
String. If the member is present, it marks this element as an item title, and contains the human-readable text of the item title.
String, defined only if ✖ .itemTitle is present. UID of the target (or titled) item, the same as ✖ .uid .
String. If the member is present, it marks this element as a placeholder for emitting a nested item, and contains the UID of that item, the same as ✖ .uid . Note that the same item (with the same UID) can occur multiple times in the document, and one of these occurrences will be suggested as the home (primary) location for the item - check ✖ .isHomeLocation if this matters for the rendered document format.
Boolean, defined only if ✖ .item is present. If true, this location is suggested as the item's home location. There is only one home location for each item.
String, defined only if ✖ .item is present. Defines the suggested display mode for item emitted into this placeholder. Can be either of:
  • "brief": only brief part of the item data should be displayed
  • "full": the full item data should be displayed
String. If the member is present, it is a Markdown text fragment. Some HTML-like tags, case-sensitive, should be interpreted as LP inline references (the text properties are HTML-encoded):
  • <lp-src file="filename"></lp-src> (no inner tag text, file is HTML-encoded): an inline reference to the LP input source file, with no .lpinput suffix. It is always present; it is up to the renderer to strip it or to interpret it.
  • <lp-ref uid="UID" text="display text"></lp-ref> (no inner tag text, uid and text are HTML-encoded): an inline LP link to an item (as per <#ref ...#>). UID is the same as ✖ .uid . The display text can be empty, in which case it is recommended to use the item's title ( ✖ title ).
String. If the member is present, it marks this element as the opener of a titled section, and contains the section ID to be matched by a later ✖ .closeSection .
String. If the member is present, it marks this element as the closure of a titled section, and contains the section ID to close, matching an earlier ✖ .openSection .
String, defined only if ✖ .openSection is present. Title of the opened section.
If the member is present, it marks a table block. Object with the following member properties...
Members
Name
Description
✖ .headers[]
Array of headers, in the display order of columns. Each element is a string with column header as markdown text (same as in ✖ .text ).
✖ .rows[]
Array of rows, in the display order. Each element is an array of columns, in the display order of columns; each sub-element is a string with the column data as Markdown text (same as in ✖ .text ).
Members (detailed)
Array of headers, in the display order of columns. Each element is a string with column header as markdown text (same as in ✖ .text ).
Array of rows, in the display order. Each element is an array of columns, in the display order of columns; each sub-element is a string with the column data as Markdown text (same as in ✖ .text ).
Array of arrays of strings. If the member is present, it marks this element as a (flat, unnumbered) list. Each element of the array is a list item; each sub-element is a Markdown text (same as in ✖ .text ), and the sub-elements are assumed to be appended in array order to form the line.
The additional part of the item's model, displayed in full display mode in addition to the basic one. An array (non-null, possibly empty) that can contain the same elements as ✖ .modelBasic[] .
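Since each model element is distinguished by which marker member it carries, a renderer typically dispatches on member presence. A sketch of such a dispatcher, with arbitrary placeholder output strings, could be:

```javascript
// Sketch: dispatching on a modelBasic[]/modelMore[] element by its marker
// member, per the element shapes described above
function renderModelElement(el) {
	if ("itemTitle" in el) return "# " + el.itemTitle;
	if ("item" in el) return "[emit item " + el.item + ", " + el.printType + "]";
	if ("text" in el) return el.text;
	if ("openSection" in el) return "<section " + el.openSection + ": " + el.title + ">";
	if ("closeSection" in el) return "</section " + el.closeSection + ">";
	if ("table" in el) return "[table, " + el.table.headers.length + " columns]";
	if ("list" in el) return el.list.map((li) => "- " + li.join("")).join("\n");
	return ""; // unrecognized element: skip
}
```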
The translator is invoked by ✖ ${LP_HOME}/lpgwrite-i18n-assist: Auxiliary generator for localization assistance generator. Its purpose is to return an initial translation for a given string, according to the locale information supplied. It is specified by the translator field of an lpgwrite-i18n-assist generate job item (lpgwrite-i18n-assist/translator). A translator must be implemented as a CommonJS module that exposes the following interface...
Members
Name
Description
✖ async .translate(str, translatorArgs)
Perform the translation and return the result.
The interface must be exposed by translator module via module.exports similar to:
exports.translate = async function translate(str, translatorArgs) { ... }
Members (detailed)
Perform the translation and return the result.
Arguments
Name
Description
✖ str
String, the string to translate. The string is assumed to be Markdown code, with the following possible HTML-ish tags:
  • <lp-ref item="FDOM name">ref alt text</lp-ref>: an LP inline link. The ref alt text can be translated (note it is HTML-encoded); the rest must be left intact.
  • <lp-tag>...text...</lp-tag>: a custom markup tag (as per ✖ extraTags ). ...text... is the HTML-encoded JSON code of the object and should remain such in the translation result. It is up to the translator to be aware of what custom tags are possible and what the correct translation scope within them is. If the fragment cannot be identified, it should be left as is.
✖ translatorArgs
The argument value specified for the translator in ✖ translatorArgs . Passed as-is from the configuration object.
Returns:
The translated ✖ str , given the caveats mentioned there. It is also recommended to add a proofread hint mark to the translated string.
Errors:
The translate can throw an error.
Arguments (detailed)
String, the string to translate. The string is assumed to be Markdown code, with the following possible HTML-ish tags:
  • <lp-ref item="FDOM name">ref alt text</lp-ref>: an LP inline link. The ref alt text can be translated (note it is HTML-encoded); the rest must be left intact.
  • <lp-tag>...text...</lp-tag>: a custom markup tag (as per ✖ extraTags ). ...text... is the HTML-encoded JSON code of the object and should remain such in the translation result. It is up to the translator to be aware of what custom tags are possible and what the correct translation scope within them is. If the fragment cannot be identified, it should be left as is.
The argument value specified for the translator in ✖ translatorArgs . Passed as-is from the configuration object.
Methods
async .translate(str, translatorArgs)
This writer writes FDOM output to a JSON file. A complementing reader is ✖ logipard/lpgread-basic-json.js . Usage of this writer for a compile-stage item is enabled by writer: "${LP_HOME}/lpcwrite-basic-json" $ in ✖ writer .
This writer uses extra member lpcwrite-basic-json in the compilation item configuration:
{
	...
	writer: "${LP_HOME}/lpcwrite-basic-json" $, // paste verbatim!
	lpcwrite-basic-json: {
		outFile: ...,
		extraTags: { // optional
			...
		}
	}
}
The extra member in an ✖ items[] entry that contains configuration for lpcwrite-basic-json.
Members
Name
Description
✖ extraTags
Object, as a dictionary tagName: tagContentType. Describes additional tags that the writer will recognize in the input. These will appear in the compiled model via customTag objects (see ✖ .content [lpgread-basic-json] ). The custom tags not described here will be ignored with a warning. Note that tag names are case-insensitive (will be lowercased).
✖ outFile
String. Path to the output JSON file (a non-absolute path is relative to the project root). The file is overwritten, but a model representation already existing there will be updated rather than rebuilt from scratch, attempting to preserve the parts of the data for which the input didn't actually change.
Members (detailed)
Object, as a dictionary tagName: tagContentType. Describes additional tags that the writer will recognize in the input. These will appear in the compiled model via customTag objects (see ✖ .content [lpgread-basic-json] ). The custom tags not described here will be ignored with a warning. Note that tag names are case-insensitive (will be lowercased).
The dictionary format is:
extraTags: {
	"tagName1": <tag1 content type>, // string
	"tagName2": <tag2 content type>, // string
	...
}
Properties
An embedded file. The customTag object will be { name: "<tag-name>", file: "data:;base64,...file binary data as base64 data URL..." }
Text. The customTag object will be { name: "<tag-name>", text: "...the inside of the tag as plain text..." }
String. Path to the output JSON file (a non-absolute path is relative to the project root). The file is overwritten, but a model representation already existing there will be updated rather than rebuilt from scratch, attempting to preserve the parts of the data for which the input didn't actually change.
This generation writer produces human-readable documentation, extracted and structured according to a document program (see ✖ ${LP_HOME}/lpgwrite-example document program ). lpgwrite-example by itself only determines the general, format-agnostic structure of the document, while rendering of the actual document is delegated to a sub-plugin named renderer. As the title suggests, built-in renderers for single-page HTML and single-page MD are available, but in fact the user can plug in their own renderers.
lpgwrite-example adds some extra FDOM comprehensions:
  • a member named %title contains the human-readable title for the item. If there is no %title member, the title is assumed to be the same as the item's short name. It typically follows the item opening in the pattern <#./%title: Your title#> (note that omitting ./ or : is a mistake, and one that may take some practice to unlearn).
  • the text content, aside from special field values and LP tags, is assumed to be Markdown formatted text (see the MD reference e. g. here)
  • the first paragraph of the item's text content, provided it is not a list element or a non-inline code block, is considered the brief information. Together with the rest of the item's text content, it makes up the full information.
This writer uses extra member lpgwrite-example in the generation item configuration ( ✖  ${LP_HOME}/lpgwrite-example: An example generator of single-page HTML/MD documentation ):
{
	...
	writer: "${LP_HOME}/lpgwrite-example" $, // paste verbatim!
	lpgwrite-example: {
		trace: ...,
		program: [...],
		renders: [
			{
				docModel: ...,
				renderer: ...,
				... // renderer-specific added config
			},
			...
		]
	}
}
The members of lpgwrite-example object, as follows:
Members
Name
Description
✖ renders[]
The list of sub-jobs that actually render a document. In addition to the members listed below, it can contain additional members with renderer-specific configuration fragments.
✖ trace
Boolean, optional. If true, document program processing will have some added log verbosity, letting you track the details of what is and is not done.
✖ program[]
An array of document program instructions (see ✖ ${LP_HOME}/lpgwrite-example document program )
Members (detailed)
The list of sub-jobs that actually render a document. In addition to the members listed below, it can contain additional members with renderer-specific configuration fragments.
Members
Name
Description
✖ renderer
String, path to the renderer module. The renderer must comply with ✖ Interface for lpgwrite-example's renderer . Logipard comes with the following built-in renderers...
✖ docModel
String, document model to use. Refers to docModel in the document program, specifically to value of name in ✖ Document model definition .
Members (detailed)
String, path to the renderer module. The renderer must comply with ✖ Interface for lpgwrite-example's renderer . Logipard comes with the following built-in renderers...
  • HTML ( ✖  ${LP_HOME}/lpgwrite-example-render-html: HTML renderer for lpgwrite-example generator )
  • Markdown ( ✖  ${LP_HOME}/lpgwrite-example-render-md: Markdown renderer for lpgwrite-example generator )
String, document model to use. Refers to docModel in the document program, specifically to value of name in ✖ Document model definition .
Boolean, optional. If true, document program processing will have some added log verbosity, letting you track the details of what is and is not done.
An array of document program instructions (see ✖ ${LP_HOME}/lpgwrite-example document program )
The generator assists in language translation of JSON-backed FDOM files compiled by ✖ ${LP_HOME}/lpcwrite-basic-json: Writer of FDOM into JSON file . The idea is to extract the translatable text into human-readable and editable interim translation files, and then keep up-to-date translated clones of the source JSON FDOM files backed by this translation. The interim files, in turn, are kept up to date with the source FDOM file and can be stored in VCS alongside the project code. Initial translation is delegated to a sub-plugin named translator. A built-in dummy translator lpgwrite-i18n-assist-trn-none is provided, but the user can in fact plug in their own translators.
lpgwrite-i18n-assist forms a sort of "sub-pipeline": its output is intermediate and is meant to be picked up by the actual generators that follow it in the lp-generate items list. Hence, place the lpgwrite-i18n-assist item before the generators that rely on its result.
lpgwrite-i18n-assist adds extra FDOM comprehension:
  • %title member, if available, contains a human readable title for the parent item (similarly to ✖ ${LP_HOME}/lpgwrite-example )
  • if the %title member is tagged with the %noloc tag, the title is assumed non-localizable: it will not be included in the interim file, and will be transferred to the translated FDOM file as is
E. g., <#./%title: This title will be translated#>, and <#./%title %noloc: This title will not be translated#>. This comprehension does not conflict with one from ✖ ${LP_HOME}/lpgwrite-example and complements it seamlessly.
This generator uses the extra member lpgwrite-i18n-assist in the generation item configuration:
...
{
	inFile: ...,
	writer: "${LP_HOME}/lpgwrite-i18n-assist" $, // paste verbatim!
	lpgwrite-i18n-assist: {
		translator: ...,
		items: [
			{
				outFile: ...,
				interimFile: ...,
				interimFileCharset: ...,
				translatorArgs: ...
			}
		]
	}
},
...
The members of lpgwrite-i18n-assist object, as follows:
Members
Name
Description
✖ items[]
Array. Items to process within this lpgwrite-i18n-assist job, using the common translator specified by ✖ translator . Each items[] element is an object as follows:
✖ translator
String, path to the translator module, absolute or relative to project root. The translator must comply with ✖ Interface for lpgwrite-i18n-assist's translator . Logipard comes with the built-in dummy translator ✖  ${LP_HOME}/lpgwrite-i18n-assist-trn-none: Dummy translator for lpgwrite-i18n-assist generator .
Members (detailed)
Array. Items to process within this lpgwrite-i18n-assist job, using the common translator specified by ✖ translator . Each items[] element is an object as follows:
Members
Name
Description
✖ translatorArgs
Arbitrary JSON/LPSON value, optional (default = null). The object or value that will be transferred to the translator's method ✖ async .translate(str, translatorArgs) .
✖ outFile
String. Path to the output JSON FDOM file with the translated text, absolute or relative to project root. Assumed to have .json extension.
✖ interimFile
String. Path to the interim translation file, absolute or relative to project root. Essentially an almost-plain text file, so assumed to have .txt extension.
✖ interimFileCharset
String, optional (default = "utf-8"). The charset to use in the interim file.
Members (detailed)
Arbitrary JSON/LPSON value, optional (default = null). The object or value that will be transferred to the translator's method ✖ async .translate(str, translatorArgs) .
String. Path to the output JSON FDOM file with the translated text, absolute or relative to project root. Assumed to have .json extension.
String. Path to the interim translation file, absolute or relative to project root. Essentially an almost-plain text file, so assumed to have .txt extension.
String, optional (default = "utf-8"). The charset to use in the interim file.
String, path to the translator module, absolute or relative to project root. The translator must comply with ✖ Interface for lpgwrite-i18n-assist's translator . Logipard comes with the built-in dummy translator ✖  ${LP_HOME}/lpgwrite-i18n-assist-trn-none: Dummy translator for lpgwrite-i18n-assist generator .
The interim file looks like this...
...
## Item: /domain.logipard/interfaces/compile/%title
# lp-stage-plugin-ifs.lp-txt
/ "Para:Ev+yL9F/vTiMmuKTf0MCOtkPdxbajKJGYcTegdUiEhKX4g0C7A+PMVsfHPOVu90ZRrksqgrsekUutwoGUA72zw=="
Interfaces related to compilation stage
\ "Para:Ev+yL9F/vTiMmuKTf0MCOtkPdxbajKJGYcTegdUiEhKX4g0C7A+PMVsfHPOVu90ZRrksqgrsekUutwoGUA72zw=="

## Item: /domain.logipard/interfaces/compile
## Item: /domain.logipard/interfaces/compile/writer-toolkit/%title
# internal/lp-compile-tools.js
/ "Para:vGDelX4EnoLn07hY9QgDuASeK7cUvLxrere0vuqNEu/pOGNVoVfpoUEsEtI0IW/gLrN3w2BHhUdktg51eEeEKg=="
Compile stage writer toolkit for custom tag processor
\ "Para:vGDelX4EnoLn07hY9QgDuASeK7cUvLxrere0vuqNEu/pOGNVoVfpoUEsEtI0IW/gLrN3w2BHhUdktg51eEeEKg=="
...
The / "Para:..."...\ "Para:..." lines delimit the translated content, which you can edit manually. lpgwrite-i18n-assist will not overwrite them unless the corresponding pieces of the original content are changed (although they can move around to modification of FDOM structure).
For better maintainability, and to reduce the re-translation effort required when content mutates, lpgwrite-i18n-assist keeps the granularity of the editable units at one paragraph or list item each, grouped by item and retaining the model order.
Don't modify the Para lines themselves or the codes in them: these are tags that match the fragments against their counterparts in the original content. The lines outside should be treated as comments for navigation convenience, and are subject to change with no guarantees.
This reader is able to read FDOM from JSON file compiled by ✖ ${LP_HOME}/lpcwrite-basic-json: Writer of FDOM into JSON file . It follows the recommended model reader interface outline: ✖ Suggested compiled FDOM reader interface .
Members
Name
Description
✖ async loadFromFile(filePath [, extractSrcFile])
Load the model into memory and expose for reading in FDOM comprehension ( ✖ FDOM querying ). Module level function.
Usage example (assuming you have Logipard installed globally or as node module):
const { loadFromFile } = require('logipard/lpgread-basic-json');

async function main() { // the loader API is async

	var reader = await loadFromFile("your-fdom.json");

	// assuming your model contains the items named as below...
	reader.nameAlias("domain.your.program", "M"); // set name alias
	var classesSection = reader.item("M/classes"); // <Item>, domain.your.program/classes
	var classA = reader.item(classesSection, "classA"); // <Item>, domain.your.program/classes/classA

	// let's find items for all classes extended by A and print their titles
	var extendsTag = reader.item("%extends"); // %extends (renamed: "extends" is a reserved word in JS)
	var queryCtxItemsExtByA = reader.newQueryContext(); // <QueryContext>
	var itemsExtByA = queryCtxItemsExtByA.with(classA) // or .with(queryCtxItemsExtByA.collection(classA))
		.query({ inMembersThat: { named: "^%extends$" }, recursive: true, query: { tagsThat: true }})
		.teardownCollection(); // itemsExtByA = <Collection>

	for (var itemExtByA of itemsExtByA) { // itemExtByA = <Item>
		console.log(reader.item(itemExtByA, "%title").content[0]); // assume all items have %title members with plain-text only content
	}
}
Members (detailed)
Load the model into memory and expose for reading in FDOM comprehension ( ✖ FDOM querying ). Module level function.
Arguments
Name
Description
✖ filePath
String. Path (same as for Node.JS fs methods) to the FDOM JSON file.
✖ extractSrcFile
Bool, optional (default is false). If true, references to LP source file names will be added as inline text fragments. Can be useful when reading for diagnostic purposes.
Returns:
The reader handle, ✖ <Reader> [lpgread-basic-json]
Errors:
loadFromFile can throw an error.
Usage:
const { loadFromFile } = require('logipard/lpgread-basic-json.js');
var reader = await loadFromFile("my-fdom-file.json");
Arguments (detailed)
String. Path (same as for Node.JS fs methods) to the FDOM JSON file.
Bool, optional (default is false). If true, references to LP source file names will be added as inline text fragments. Can be useful when reading for diagnostic purposes.
Methods
async loadFromFile(filePath [, extractSrcFile])
Reader object, the primary handle for access to the loaded FDOM.
Members
Name
Description
✖ .item([itemRelTo,] name)
Get an item by its full or relative FDOM name. Similar to ✖ .item([baseItem ,] name) , but does not support aliases, since it is used outside a query context.
✖ .itemByUid(uid)
Return an item by UID (see ✖ .uid ). Since this is a reader-specific method not prescribed by the FDOM comprehension, it can return null for a non-existent item. Same as ✖ .itemByUid(uid) .
✖ .newQueryContext()
Create a new query context object.
Members (detailed)
Get an item by its full or relative FDOM name. Similar to ✖ .item([baseItem ,] name) , but does not support aliases, since it is used outside a query context.
Arguments
Name
Description
✖ itemRelTo
Optional. If specified, it denotes the item considered as the base for the ✖ name path, which is treated as a relative path in this case. Can be either of:
✖ name
The item name, full if itemRelTo is not provided, or relative to it otherwise.
Returns:
The item, as ✖ <Item> [lpgread-basic-json] . Note that, per the FDOM querying paradigm, it is never a null value: if the item effectively does not exist, a null item is returned.
Arguments (detailed)
Optional. If specified, it denotes the item considered as the base for the ✖ name path, which is treated as a relative path in this case. Can be either of:
  • string: the full name of base item as string
  • array: the full name of base item, split into array of short names
  • ✖ <Item> [lpgread-basic-json] : item specified via its direct object
The item name, full if itemRelTo is not provided, or relative to it otherwise.
Return an item by UID (see ✖ .uid ). Since this is a reader-specific method not prescribed by the FDOM comprehension, it can return null for a non-existent item. Same as ✖ .itemByUid(uid) .
Arguments
Name
Description
✖ uid
String. The item UID, as returned by its .uid property.
Returns:
✖ <Item> [lpgread-basic-json] , or null.
Arguments (detailed)
String. The item UID, as returned by its .uid property.
Create a new query context object.
Returns:
✖ <Context> [lpgread-basic-json]
Methods
.item([itemRelTo,] name)
.itemByUid(uid)
.newQueryContext()
A FDOM item, as implemented in ✖ logipard/lpgread-basic-json.js .
Extends (is a)
  • ✖ <Item>
Members
Name
Description
✖ .content [lpgread-basic-json]
Read-only property, array of content elements. Implements ✖ .content in ✖ logipard/lpgread-basic-json.js specific flavour.
✖ .uid
Read-only property, string. The item's shortcut UID in the JSON representation of the model.
✖ .toString()
JS stringification
Members from extents
Name
Description
✖ .content
Read-only property. The item content (text, inline references, and whatever else the reader's backing model supports).
✖ .name
Read-only property, string. The item's full path name (with no namespace aliases)
✖ .shortName
Read-only property, string. The item's short name (last segment of the full path name).
✖ .tags
Read-only property. Collection of the item's tags.
✖ .members
Read-only property. Collection of the item's members.
✖ .isNull
Read-only property, bool. Check if item is empty (true) or not (false).
✖ .parent
Read-only property, ✖ <Item> . Returns the parent item (the one this item is a member of). For the root item, returns null (not a null item).
✖ .isConditionTrue(lpqCtx, condSpec)
Check if the item satisfies a certain condition, must be done relative to a query context (in order to resolve condition and collection aliases).
Members (detailed)
Read-only property, array of content elements. Implements ✖ .content in ✖ logipard/lpgread-basic-json.js specific flavour.
Each element is either of:
  • string: a plain-text piece of content; interpretation is up to the user and the particular context in the user-level model.
  • object { ref: <Item>, text: string } (ref is ✖ <Item> [lpgread-basic-json] ): inline item ref
  • object { customTag: object }: a custom tag, originating from ✖ ${LP_HOME}/lpcwrite-basic-json: Writer of FDOM into JSON file
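A reader-side consumer can dispatch over these three element shapes; a sketch of flattening a .content array into plain text (the handling of refs and custom tags is an arbitrary choice for illustration):

```javascript
// Sketch: flattening an item's .content array into plain text, covering
// the three element shapes listed above
function contentToPlainText(content) {
	let out = "";
	for (const el of content) {
		if (typeof el === "string") out += el; // plain text piece
		else if (el.ref) out += el.text;       // inline item ref: use its display text
		// custom tags (el.customTag) carry no plain text here and are skipped
	}
	return out;
}
```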
Read-only property, string. The item's shortcut UID in the JSON representation of the model.
JS stringification
Members from extents (detailed)
.content
.name
.shortName
.tags
.members
.isNull
.parent
.isConditionTrue(lpqCtx, condSpec)
Methods from extents
.isConditionTrue(lpqCtx, condSpec)
A FDOM collection, as implemented in ✖ logipard/lpgread-basic-json.js .
Extends (is a)
  • ✖ <Item>
Members
Name
Description
✖ .toString()
JS stringification
Members from extents
Name
Description
✖ .content
Read-only property. The item content (text, inline references, and whatever else the reader's backing model supports).
✖ .name
Read-only property, string. The item's full path name (with no namespace aliases).
✖ .shortName
Read-only property, string. The item's short name (last segment of the full path name).
✖ .tags
Read-only property. Collection of the item's tags.
✖ .members
Read-only property. Collection of the item's members.
✖ .isNull
Read-only property, bool. Check if item is empty (true) or not (false).
✖ .parent
Read-only property, ✖ <Item> . Returns the parent item (the one this item is a member of). For the root item, returns null (not a null item).
✖ .isConditionTrue(lpqCtx, condSpec)
Checks whether the item satisfies a certain condition; the check must be done relative to a query context (in order to resolve condition and collection aliases).
Members (detailed)
JS stringification
Members from extents (detailed)
.content
.name
.shortName
.tags
.members
.isNull
.parent
.isConditionTrue(lpqCtx, condSpec)
Methods from extents
.isConditionTrue(lpqCtx, condSpec)
A FDOM compiled query object, as implemented in ✖ logipard/lpgread-basic-json.js .
Extends (is a)
  • ✖ <Query>
A FDOM query context, as implemented in ✖ logipard/lpgread-basic-json.js .
Extends (is a)
  • ✖ <QueryContext>
Members
Name
Description
✖ .itemByUid(uid)
Return an item by UID (see ✖ .uid ). Since this is a reader-specific method not prescribed by the FDOM comprehension, it can return null for a non-existent item.
✖ .clearNameAlias(aliasName)
Clear item name alias set by ✖ .nameAlias(aliasName, item) . The alias is no longer valid until re-assigned.
✖ .clearCollectionAlias(collectionAliasName)
Clear collection alias set by ✖ .collectionAlias(aliasName, ...collectionSpecs) . The alias is no longer valid until re-assigned.
✖ .clearQueryAlias(queryAliasName)
Clear query name alias set by ✖ .queryAlias(aliasName, ...querySpecs) . The alias is no longer valid until re-assigned.
✖ .clearConditionAlias(conditionAliasName)
Clear condition name alias set by ✖ .conditionAlias(aliasName, condSpec) . The alias is no longer valid until re-assigned.
Members from extents
Name
Description
✖ .nameAlias(aliasName, item)
Set an item alias name (which should be a valid shortname) that can later be used as a standalone item name, or as a starter for another item name, within this ✖ <QueryContext> . Behaviour when an alias with the given name already exists is implementation-specific.
✖ .collectionAlias(aliasName, ...collectionSpecs)
Set a named collection alias that can be used later to reference the collection within this context ( ✖ <CollectionSpec> ). The collection is built up from the collections corresponding to each element of the specs list. This alias is permanent within the context, unlike a query-local alias ( ✖ Set local collection alias ["alias ..."] ).
✖ .queryAlias(aliasName, ...querySpecs)
Set a named query alias that can be used later to reference the query within this context ( ✖ <QuerySpec> ). The list is interpreted as a composite query.
✖ .conditionAlias(aliasName, condSpec)
Set a named condition alias that can be used later to reference the condition within this context ( ✖ <Condition> ).
✖ .item([baseItem ,] name)
Return an item by the given path name, either full or relative to the provided base item. The first segment (shortname) of a full item name can be a name alias defined in this ✖ <QueryContext> .
✖ .collection(...collectionSpecs)
Returns a collection specified by a list of collection item specs. Each list item is a ✖ <CollectionSpec> .
✖ .with(...collectionSpecs)
Set the current collection for the subsequent query (a call to ✖ .query(...querySpecs) ). The collection is built up from the collections corresponding to each element of the specs list. .with effectively initiates the query chain, but it can also be used in the middle of the chain to override the current collection after a certain step.
✖ .query(...querySpecs)
Perform a query, or a list of queries interpreted as a composite query, given the current collection specified by a preceding ✖ .with(...collectionSpecs) or resulting from previous .query calls. Note that the resulting collection is not returned immediately; it becomes the new current collection instead.
✖ .teardownCollection()
Finalize the query and return the result (the current collection at the time of the call). The current collection itself is reset, so the next query must be re-initialized, starting over from ✖ .with(...collectionSpecs) .
✖ .currentCollectionAlias(aliasName)
Set a named collection alias for the current collection that can be used later to reference the collection within this context ( ✖ <CollectionSpec> ). It is only usable mid-query (when the current collection is meaningful); otherwise it is an error. This is a local query alias, unlike a permanent one ( ✖ Set local collection alias ["alias ..."] ).
✖ .compileQuery(...querySpecs)
Compile a query into a handle object usable later to reference the query within this context ( ✖ <QuerySpec> ). The list is interpreted as a composite query.
Members (detailed)
Return an item by UID (see ✖ .uid ). Since this is a reader-specific method not prescribed by the FDOM comprehension, it can return null for a non-existent item.
Arguments
Name
Description
✖ uid
String. The item UID, as returned by its .uid property.
Returns:
✖ <Item> [lpgread-basic-json] , or null.
Arguments (detailed)
String. The item UID, as returned by its .uid property.
Clear item name alias set by ✖ .nameAlias(aliasName, item) . The alias is no longer valid until re-assigned.
Arguments
Name
Description
✖ aliasName
String. The alias name.
Returns:
Self ( ✖ <Context> [lpgread-basic-json] ), allowing more calls to be chained
Arguments (detailed)
String. The alias name.
Clear collection alias set by ✖ .collectionAlias(aliasName, ...collectionSpecs) . The alias is no longer valid until re-assigned.
Arguments
Name
Description
✖ collectionAliasName
String. The alias name.
Returns:
Self ( ✖ <Context> [lpgread-basic-json] ), allowing more calls to be chained
Arguments (detailed)
String. The alias name.
Clear query name alias set by ✖ .queryAlias(aliasName, ...querySpecs) . The alias is no longer valid until re-assigned.
Arguments
Name
Description
✖ queryAliasName
String. The alias name.
Returns:
Self ( ✖ <Context> [lpgread-basic-json] ), allowing more calls to be chained
Arguments (detailed)
String. The alias name.
Clear condition name alias set by ✖ .conditionAlias(aliasName, condSpec) . The alias is no longer valid until re-assigned.
Arguments
Name
Description
✖ conditionAliasName
String. The alias name.
Returns:
Self ( ✖ <Context> [lpgread-basic-json] ), allowing more calls to be chained
Arguments (detailed)
String. The alias name.
Members from extents (detailed)
.nameAlias(aliasName, item)
.collectionAlias(aliasName, ...collectionSpecs)
.queryAlias(aliasName, ...querySpecs)
.conditionAlias(aliasName, condSpec)
.item([baseItem ,] name)
.collection(...collectionSpecs)
.with(...collectionSpecs)
.query(...querySpecs)
.teardownCollection()
.currentCollectionAlias(aliasName)
.compileQuery(...querySpecs)
Methods
.itemByUid(uid)
.clearNameAlias(aliasName)
.clearCollectionAlias(collectionAliasName)
.clearQueryAlias(queryAliasName)
.clearConditionAlias(conditionAliasName)
Methods from extents
.nameAlias(aliasName, item)
.collectionAlias(aliasName, ...collectionSpecs)
.queryAlias(aliasName, ...querySpecs)
.conditionAlias(aliasName, condSpec)
.item([baseItem ,] name)
.collection(...collectionSpecs)
.with(...collectionSpecs)
.query(...querySpecs)
.teardownCollection()
.currentCollectionAlias(aliasName)
.compileQuery(...querySpecs)
This renderer for ✖ ${LP_HOME}/lpgwrite-example produces documentation as a single HTML page with navigation facilities, like the one you are reading now.
This renderer uses extra member lpgwrite-example-render-html in the ✖ renders[] generation item configuration:
lpgwrite-example: {
	...
	renders: [
		{
			docModel: ...,
			renderer: "${LP_HOME}/lpgwrite-example-render-html" $, // paste verbatim!
			lpgwrite-example-render-html: {
				outFile: ...,
				emitToc: ...,
				inTemplateFile: "logipard-doc.tpl.html",
				cssClasses: {
				// all of these are optional; the whole cssClasses object can be omitted altogether
					itemTitle: ...,
					rawTitle: ...,
					paragraph: ...,
					verbatimSpan: ...,
					linkSpan: ...,
					moreSpan: ...,
					elsewhereSpan: ...,
					actionSpan: ...,
					offSiteBlock: ...
				},
				htmlPlaceholder: ...,
				cssPlaceholder: ...,
				extraTokens: {
					TOKEN_ID: "token value",
					ANOTHER_TOKEN_ID: "token value 2",
					...
				},
				localizedKeywords: {
					// adjust these according to the target locale
					SNAPBACK: "Snapback",
					SNAPBACK_AND_SCROLL: "Snapback & Scroll",
					ELEVATE: "Elevate",
					RESET: "Reset",
					ELEVATE_TO: "Elevate to...",
					COPY_ITEM_NAME: "Copy this item's LP FDOM full name to clipboard:",
					ITEM_UNFOLDED_ELSEWHERE: "Item unfolded elsewhere on page, click/tap to unfold here...",
					MORE: "More... >>",
					TABLE_OF_CONTENTS: "Table of contents"
				},
				addSourceRef: ...
			}
		},
		...
	]
}
The lpgwrite-example-render-html object inside the corresponding renders[] item, with the following members...
Members
Name
Description
✖ outFile
String. Path to the output document file (.html) to write, absolute or relative to the project root.
✖ emitToc
Boolean, optional (default = true). If true, the renderer will add a TOC section to the document.
✖ inTemplateFile
String. Path to the template file for the output HTML, absolute or relative to the project root. The template is a blueprint for the resulting HTML file, with placeholders added for the generated CSS, HTML, and possible extra tokens.
✖ cssClasses
Dictionary of strings, optional. The CSS classes to apply to certain elements of the output document. Note that these classes are meant to cascade with lpgwrite-example-render-html's generated classes, which determine layout, so they should only contain properties that affect appearance (font, color, background, padding, etc.), not layout (display, grid or related, flex or related, position, z-order, etc.).
✖ htmlPlaceholder
String. The exact placeholder string to replace with the generated HTML code; it should be placed inside the <body> tag. The inserted code will be wrapped in a single <div> element with no explicit classes or styling directly on it.
✖ cssPlaceholder
String. The exact placeholder string to replace with the generated CSS code; it should be placed inside a <style> tag, outside any block.
✖ extraTokens
Dictionary of strings, optional. Any additional tokens to substitute in the template. The keys are the exact placeholder strings to replace (they should not duplicate htmlPlaceholder, cssPlaceholder, or each other); the values are the raw HTML code to insert in their places.
✖ localizedKeywords
Dictionary of strings, optional. The list of strings used for certain UI purposes in the generated document, expected to be appropriate for the document's target locale. These strings are plain text.
✖ addSourceRef
Boolean, optional (default = false). If set to true, the generator will add source file names to the text fragments, which helps to track the origin of a particular piece of text. This mode is useful for proofreading and debugging the draft document, especially as your project grows and its information spreads across multiple files.
Members (detailed)
String. Path to the output document file (.html) to write, absolute or relative to the project root.
Boolean, optional (default = true). If true, the renderer will add a TOC section to the document.
String. Path to the template file for the output HTML, absolute or relative to the project root. The template is a blueprint for the resulting HTML file, with placeholders added for the generated CSS, HTML, and possible extra tokens.
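For instance, a minimal template file could look like the sketch below. The placeholder strings %LP-CSS% and %LP-HTML% are arbitrary examples, not prescribed names; they only need to match the cssPlaceholder and htmlPlaceholder values in the configuration.

```html
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>My project documentation</title>
<style>
/* hand-written page styles can go here */
</style>
<style>
%LP-CSS%
</style>
</head>
<body>
%LP-HTML%
</body>
</html>
```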
Dictionary of strings, optional. The CSS classes to apply to certain elements of the output document. Note that these classes are meant to cascade with lpgwrite-example-render-html's generated classes, which determine layout, so they should only contain properties that affect appearance (font, color, background, padding, etc.), not layout (display, grid or related, flex or related, position, z-order, etc.).
The object can contain the following members, all of them are strings and are optional (the generator will use defaults if needed):
  • itemTitle: class for an item title element (the big clickable title with navigation elements)
  • rawTitle: class for a non-item title element (the secondary title, like 'Notes' or 'Members')
  • paragraph: class for a generic inline paragraph of text
  • verbatimSpan: class for an inline code fragment (like this one). It doesn't affect code blocks - those are rendered as <code> tags and are styled via them.
  • linkSpan: class for a Logipard inline link
  • moreSpan: class for a clickable "More..." label visible in item brief view mode
  • elsewhereSpan: class for "Item is located elsewhere..." text visible on folded-out item placeholder
  • actionSpan: class for the actions on an item title ("Snapback" etc.); note that the affected elements are children of the title element, which is styled by itemTitle
  • offSiteBlock: class applied to an item which is unfolded at a non-home location (the default implementation is a blue outline on the top, bottom, and left)
String. The exact placeholder string to replace with the generated HTML code; it should be placed inside the <body> tag. The inserted code will be wrapped in a single <div> element with no explicit classes or styling directly on it.
String. The exact placeholder string to replace with the generated CSS code; it should be placed inside a <style> tag, outside any block.
Dictionary of strings, optional. Any additional tokens to substitute in the template. The keys are the exact placeholder strings to replace (they should not duplicate htmlPlaceholder, cssPlaceholder, or each other); the values are the raw HTML code to insert in their places.
Dictionary of strings, optional. The list of strings used for certain UI purposes in the generated document, expected to be appropriate for the document's target locale. These strings are plain text.
The object can contain the following members, all of them are strings and are optional (the generator will use defaults if needed):
  • SNAPBACK: the string for "Snapback" action (item title)
  • SNAPBACK_AND_SCROLL: the string for "Snapback & Scroll" action (item title)
  • ELEVATE: the string for "Elevate" action (item title)
  • RESET: the string for "Reset" action (item title)
  • ELEVATE_TO: the string for "Elevate to..." header (Elevate action dialog)
  • COPY_ITEM_NAME: the string for "Copy this item's LP FDOM full name to clipboard:" header ("#LP?" action dialog)
  • ITEM_UNFOLDED_ELSEWHERE: the string for "Item unfolded elsewhere on page, click/tap to unfold here..." prompt (folded-out item placeholder)
  • MORE: the string for "More... >>" label (item brief view)
  • TABLE_OF_CONTENTS: the string for "Table of contents" label (TOC section)
Boolean, optional (default = false). If set to true, the generator will add source file names to the text fragments, which helps to track the origin of a particular piece of text. This mode is useful for proofreading and debugging the draft document, especially as your project grows and its information spreads across multiple files.
This renderer for ✖ ${LP_HOME}/lpgwrite-example produces documentation as a single Markdown page.
This renderer uses extra member lpgwrite-example-render-md in the ✖ renders[] generation item configuration:
lpgwrite-example: {
	...
	renders: [
		{
			docModel: ...,
			renderer: "${LP_HOME}/lpgwrite-example-render-md" $, // paste verbatim!
			lpgwrite-example-render-md: {
				outFile: ...,
				emitToc: ...,
				header: ...,
				footer: ...,
				addSourceRef: ...
			}
		},
		...
	]
}
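For illustration, a filled-in version of the lpgwrite-example-render-md object might look like this (all the values here are hypothetical):

```
lpgwrite-example-render-md: {
	outFile: "docs/my-project.md",
	emitToc: true,
	header: "# My project",
	footer: "_Generated with Logipard_",
	addSourceRef: false
}
```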
Members
Name
Description
✖ outFile
String. Path to the output document file (.md) to write, absolute or relative to the project root.
✖ emitToc
Boolean, optional (default = true). If true, the renderer will add a TOC section at the start of the document.
✖ header
String, optional. If specified, the renderer will prepend this string to the beginning of the document, before the TOC if any, making it useful for a header and annotation. The string is raw Markdown code.
✖ footer
String, optional. If specified, the renderer will append this string at the end of the document. The string is raw Markdown code.
✖ addSourceRef
Boolean, optional (default = false). If set to true, the generator will add source file names to the text fragments, which helps to track the origin of a particular piece of text. This mode is useful for proofreading and debugging the draft document, especially as your project grows and its information spreads across multiple files.
Members (detailed)
String. Path to the output document file (.md) to write, absolute or relative to the project root.
Boolean, optional (default = true). If true, the renderer will add a TOC section at the start of the document.
String, optional. If specified, the renderer will prepend this string to the beginning of the document, before the TOC if any, making it useful for a header and annotation. The string is raw Markdown code.
String, optional. If specified, the renderer will append this string at the end of the document. The string is raw Markdown code.
Boolean, optional (default = false). If set to true, the generator will add source file names to the text fragments, which helps to track the origin of a particular piece of text. This mode is useful for proofreading and debugging the draft document, especially as your project grows and its information spreads across multiple files.
This translator for ✖ ${LP_HOME}/lpgwrite-i18n-assist is a dummy translator. Its "translation" is the original string as is, with an [UNTRANSLATED-<language>] prefix prepended, which can be used to search the interim file for updated and/or untranslated strings.
This translator uses the following translatorArgs under the lpgwrite-i18n-assist member in the ✖ renders[] generation item configuration:
lpgwrite-i18n-assist: {
	...
	renders: [
		{
			docModel: ...,
			renderer: "${LP_HOME}/lpgwrite-i18n-assist" $, // paste verbatim!
			lpgwrite-i18n-assist: {
				translator: "${LP_HOME}/lpgwrite-i18n-assist-trn-none" $, // paste verbatim!
				items: [
					{
						...
						translatorArgs: { lang: ... }
					},
					...
				]
			}
		},
		...
	]
}
The lpgwrite-i18n-assist-trn-none specific translatorArgs object, with members as follows:
Members
Name
Description
✖ lang
String. The language code. It will be substituted into [UNTRANSLATED-lang] prefix in the dummy translation strings.
Members (detailed)
String. The language code. It will be substituted into [UNTRANSLATED-lang] prefix in the dummy translation strings.
A certain file format that extraction readers must pre-compile the extracted input into. Along with FDOM, it is one of the few items that Logipard stipulates.
It's worth noting that this only applies to the product of the extraction readers, not to the actual user-facing source files they read. Logipard's builtin ✖ ${LP_HOME}/lpxread-basic: Basic language agnostic extraction stage reader is quite a thin wrapper over the input format, but an extraction reader does not have to consume a source as simple as that. For example, it is a perfectly valid problem statement to create an extraction reader that translates javadoc comments into Logipard-input-compliant form.
The input is split into a set of files corresponding to the input source files, preserving their folder structure and naming where possible (the .lpinput extension is appended to the original names), into a directory specified by ✖ outDir in the extract job item.
The LP input file is a UTF-8 text file with markup tags of the format <#tag-name ... #> (tag names are alphanumeric, with -'s allowed). Tags can be nested. Aside from the tag components, the format of the remaining text (plaintext, Markdown, HTML, or whatever) is opaque from the LP input perspective; its interpretation is up to the compilation and generation stages.
A typical piece of input can look like this:
<#LP ./dataStruct { <#./%title: FDOM data structure#> <#./%order: 1#>
The FDOM data structure explanation starts best with a visual example...

<#img ./lp-model.png #>

The model data consists of *nodes*. 
- blah
- blah blah

blah blah blah (see: <#ref dataStruct/parent-member#>, <#ref dataStruct/tagged-tag#> ).

blah blah total of 11 nodes.#>
It is possible to escape fragments of text by using <#~delimiter~ ... ~delimiter~#> boundaries. The delimiter is a sequence of any non-~ characters, possibly empty, and it must be the same in the opening and closing boundaries of an escape. Everything between the boundaries is taken as verbatim plain text:
<#this-is-tag
	this is content data
	<#this-is-tag-too and this is content data too#>
	<#~~
	This is all plain text, <#even-this#>
	~~#>

	This is again content data <#and-a-tag#>

	<#~a~
	This is again plain text again, <#~~ and even this ~~#>
	~a~#>
#>
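To make the boundary-matching rule concrete, here is a minimal standalone JavaScript sketch (an illustration only, not Logipard's actual parser) that locates the first escaped fragment in a string:

```javascript
// Minimal sketch of the escape boundary rule: the delimiter captured at the
// opening <#~delim~ must reoccur verbatim at the closing ~delim~#>, which the
// backreference \1 enforces.
function findEscapedText(s) {
	const m = /<#~([^~]*)~([\s\S]*?)~\1~#>/.exec(s);
	return m ? m[2] : null; // verbatim inner text, or null if no escape found
}

// Inner markup is left untouched:
findEscapedText("<#~a~ plain text, <#even-this#> ~a~#>"); // " plain text, <#even-this#> "
// A different delimiter does not close the escape:
findEscapedText("<#~a~ not closed by ~b~#>"); // null
```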
Markup tag names can start with - (it is also possible to write <-#tag-name ... #>) - these are considered commented out and have no effect on content or FDOM, although they still have to be well-formed (correctly closed tags and escapes).
this is data <#with-markup-tag inside#>
and this is <-#dropped-tag#> data with no markup tag <#-this-is-dropped-tag-either and this <#is-not#>, but is ignored as inside of a dropped one#> really
is the same as:
this is data <#with-markup-tag inside#>
and this is  data with no markup tag  really
The markup tag names starting with LP, including <#LP ...#> itself, are reserved for Logipard content feed and directives (these names are case-insensitive). Additionally, <# ... #> is treated as a shortcut for <#LP ...#>. The tag <#ref ...#> is also reserved (for Logipard item linking, see below); this name is case-insensitive as well. All other tags are called custom tags, and their handling is up to the compile model writer at ✖ Compilation stage . Their names MAY be case-sensitive, depending on the model writer implementation.
Logipard content feed is designed as the tag <#LP itemName: ...content... #> or <# itemName: ...content...#>. The item name can be followed by names of FDOM tags, optionally prefixed with #: <#LP itemName tagItemName1 tagItemName2 ...: ...#>, <#LP itemName #tagItemName1 tagItemName2 ...: ...#>, <#LP itemName #tagItemName1 #tagItemName2 ...: ...#>.
A content feed instructs the compiler to add the piece of content and attach the listed FDOM tags to the item with the given name. An input file is essentially a sequence of content feeds.
The content feeds can go in sequence:
<#LP A: this goes to item A#>
<#LP B: this goes to item B#>
<#LP C: this goes to item C#>
<#LP D %tagDWithThis#> <-# if there is no content, only FDOM tags are added; the use of `:` is optional#>
<#LP A: this goes to item A
It is absolutely ok to make multiple feeds into the same item, content from them is appended. It is ok to do this even from different input files,
but then you should keep in mind that the order in which the input files are processed is not guaranteed.
#>
or be nested:
<#LP Outer: this goes to item Outer
	<#LP Inner: this goes to item Inner#>
	this again goes to item Outer
#>
The nested content feed is called a [scope] digression (as a temporary digression from the 'current' item).
A scope digression can be made lingering by using { as the delimiter instead of :. In this case, even after the digression finishes, the current scope remains in effect until a lingering digression closer tag is encountered:
<#LP A:	this goes to A
	<#LP B { this goes to B#>
	this still goes to B
	<#LP }
	this goes to A (note that, if the closer contains remainder content, then a line break or a markup tag, at least a <-# comment#>, after the `}` is essential) #>
	this goes to A

	<#LP C/D { this goes to C/D#>
	<#LP E { lingering digressions can be nested#>
	this goes to E
	<#LP } #>
	this goes to C/D
	<#LP } #>
	this goes to A
#>
The item/tag name specified in a content feed ultimately resolves to a full FDOM name (see ✖ Parent-member relation and names ). But using literal full names everywhere would be utterly impractical, so the names you deal with inside LP markup tags are treated as partial (shortcuts), relying on a number of resolution rules.
Current item
The current item can be referred to by a single dot ("current dir") as the starting name segment. It can also be used as a fragment of a name.
<#LP A: this goes to A
	<#LP.: this goes to A too#>
#>
<#LP B/C: this goes to B/C
	<#LP.: this goes to B/C too#>
	Doesn't make much sense to add content this way, but adding a tag is a reasonable use case:
	<#LP . %tag"B/C"WithThis #>
#>
<#LP D/E/F: this goes to D/E/F
	<#LP D/E/F/G: this goes to D/E/F/G#>
	this goes to D/E/F
	<#LP ./G: this goes to D/E/F/G again#>
#>
The single dot can be omitted altogether:
<#LP A
	<#LP: this is in A (same as <#LP#>) #>
#>
If you intend to combine this syntax with adding tags, you will have to use the # prefix:
<#LP #%tagToCurrentItem#>
Up-directory shortcuts
Starting segments .., ..., and so on ("up dir") refer to one, two, etc. levels above the current name level:
<#LP D/E/F/G: this goes to D/E/F/G
	<#LP ..: this goes to D/E/F#>
	<#LP ...: this goes to D/E#>
	<#LP ....: this goes to D#>
	<#LP .....: this goes to root item#>
#>
On that note, the input file content feeds are digressions in the root item scope.
You can technically add content to the root item scope, but it is rather meaningless and bad style.
If an up-dir segment exhausts the levels of the current scope, but there is an outer nesting scope, it starts borrowing from there, then from the next outer scope, etc.
<#LP A/B
	<#LP C/D {#>
		<#LP ..: this is in C#>
		<#LP ...: it might be in root, but there is outer scope to borrow from, so it is in A/B #>
		<#LP ....: it is in A #>
		<#LP .....: and only this is in root #>
	This is again in C/D
	<#LP } #>
	This is again in A/B
#>
Current-dir and up-dir names can be used as middle segments, although this is quite a peculiar use case:
<#LP A/B
	<#LP ./C/D/..: . is A/B, ./C/D is A/B/C/D, ./C/D/.. is A/B/C, so in the end it is A/B/C #>
#>
Outer scope name shortcuts
If the initial name segment is neither a current-dir nor an up-dir, it does not necessarily mean a first-level shortname. The currently open scopes are looked up first, from the current one outward, and if a match is found, it is taken as the starting scope.
<#LP A:
	<#LP B/C: this is B/C (not A/B/C !)
		<#LP D: this is indeed D#>
		<#LP C: this is B/C#>
		<#LP B: this is B#>
		<#LP A: B level is exhausted, but then there is A, so it is A#>
	#>
#>
Name resolution rules apply to the names of the opening content feed item, its tags, and referenced items:
<#LP A/B/C
	<#LP ./%tag: it is A/B/C/%tag #>
	<#LP A C/%tag: we are in A and tag it by A/B/C/%tag
		<#ref ./B#> - reference to A/B
	#>
#>
Note that for the opening item name and its tags, the scope in effect is still the outer scope, while for refs and digressions inside, it is the digression item's scope.
Lingering digression closers with names
A lingering digression closer normally closes the whole digression as it was opened:
<#LP A:
	<#LP B/C { #>
	This goes to B/C
	<#LP } #> <-#closes B/C#>
	This goes to A
#>
But it is possible to specify a short name to designate the sub-level to close - it will be the first one that matches the name:
<#LP A:
	<#LP B/C/D { #>
	This goes to B/C/D
	<#LP } C #> <-#closes C, but leaves B (note the short name is on same line as `}`) #>
	This goes to B
	<#LP } #> <-# closes "remaining" part of the digression, which is B (we could also use <#LP } B#> with the same effect)#>
	This goes to A
#>
Instead of closing the named level, you can specify to stay at that level by appending a . segment:
<#LP A:
	<#LP B/C/D { #>
	This goes to B/C/D
	<#LP } C/. #>
	This goes to B/C (would go to B if we used "} C")
	<#LP } #>
	Again to A
#>
The "borrow outwards on levels exhaustion" rule works as well:
<#LP A:
	<#LP B/C { #>
	<#LP D/E { #>
	this goes to D/E
	<#LP } E #>
	this goes to D
	<#LP } B #>
	closed remaining D and the B/C, this goes to A
#>
but with one important caveat: you can only pop through levels opened with lingering digressions. Currently open non-lingering digressions form a digression fence that bars the closer's way outward.
<#LP A:
	<#LP B/C { #>
	<#LP D/E:
	this goes to D/E, but note that "D/E" is open as non-lingering digression, and we are still within it
	THE FOLLOWING IS INCORRECT: <#LP } E #>
	Using "} D" or "} B" or "} C" or "} A" here is disallowed as well, as they pop through still-effective D/E.
	#>
	<#LP } B #>

	<#LP F/G {
	The same applies to inside of the lingering digression opener until it is finished.
	That is, the following is incorrect: <#LP } F #>
	#>
	but here, as the lingering digression opening is done, the following one is ok:
	<#LP } F #>
#>
This is a safeguard to make the syntax more resilient against unintentional loss of scope consistency.
Scope tracing
The name resolution rules are designed to behave in the 'least-surprise' way and should feel quite intuitive in reading and writing. Nevertheless, in a questionable case you can insert the <#LP-TRACE-WHERE [optional label]#> inline markup tag, which will print the current scope at the location where it is placed, along with the optionally provided label and an explanation of the name resolution.
The name resolution rules are in effect for name and tag specification in content feed headers, macro/alias specification ( ✖ Macros , ✖ Name aliasing ), inline references ( ✖ Inline references ) and inverse tagging ( ✖ Tagging and reverse tagging ).
Inline references to FDOM items are specified via the <#REF item/name#> markup tag. The name specification obeys the name resolution rules ( ✖ FDOM item and tag names resolution ). References are supported at the FDOM compilation stage on a built-in basis - an FDOM user, such as a generator, does not have to invent a custom tag for them.
It is possible to explicitly specify an "alt" text for the reference:
This is <#ref item/name: a link to item/name with alt text#>
Unspecified alt text is assumed empty. In fact, interpretation of the alt text (or absence thereof) is up to the FDOM user, such as a generator.
Adding FDOM tag(s) is done along with the digression opening (<# name tag1 tag2 ...: ...content...#>) or, if adding to the currently scoped item, later in an auxiliary sub-digression (<#. tag1 tag2 ...#>). But it is also possible to do the reverse thing - add the currently scoped item as an FDOM tag to some other item: <#LP-TAG-TO other-item-name#>.
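For example, a sketch of reverse tagging (the item names here are made up):

```
<#LP docs/%deprecated: Marks an item as deprecated
	<#LP-TAG-TO docs/oldFunction#> <-# has the same effect as tagging
	docs/oldFunction with docs/%deprecated directly #>
#>
```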
You can include an input file as if its content was typed inline. Such a file is called a module [input] file.
<#LP-INCLUDE lp-module-inc.lp-txt#>
or shorter...
<#LP-INC lp-module-inc.lp-txt#>
Note that you specify only the file's original extension, without the .lpinput suffix.
Extractions from the module input files are a bit different from "main" input files: they have the .lpinput-inc name suffix instead of .lpinput and are not picked up automatically at the compilation stage, as they are assumed to be only parts of "main" files, to be included manually. You can't include other "main" files (but it is possible to include a module file from another module file).
Preparation of module files is typically done in separate extract job items, which have the ✖ forLPInclude flag set to true. It is also advised to keep the module files under a dedicated subdirectory (for example, if ✖ outDir for the main files is "lp-extract.gen", then for module files it can be something like "lp-extract.gen/lp-includes"). This is for a good reason. Later, at the compile stage, when handling <#LP-inc[lude] file-name#> directives, the file-name is interpreted in the following way:
  • if it starts from . or .., then it is path relative to directory of the processed input file (i. e. of one that contains the <#LP-INCLUDE#>) - but this is quite a rare use case,
  • otherwise, it looks for <includes-dir>/file-name[.lpinput-inc] via cascading lookup: starting from the directory of the processed input file, then, if not found there, in its ../<includes-dir>/file-name[.lpinput-inc], then in ../../<includes-dir>/file-name[.lpinput-inc], and so on, until found or until the <extracted-input-root> directory is reached. That is, a strategy similar to what Node.JS does on require(filename). This is the recommended method of arranging and using module files. For example, you can place a module file as <extracted-input-root>/<includes-dir>/common.lp-txt[.lpinput-inc], and then include it with <#LP-INCLUDE common.lp-txt#> from any input file under <extracted-input-root>/**.
Note that at extraction stage you can specify the outDir-s however you like, but you should take care that they match the directories that will be <extracted-input-root> and <includes-dir> at compilation stage (these are specified by ✖ inRootDir and ✖ lpIncLookupDirName , respectively). If multiple extract jobs target the same compile job, then their outDir-s must be consistent with the compile job's inRootDir and lpIncLookupDirName.
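As a sketch of matching configuration fragments (only the keys discussed here are shown; everything else is elided):

```
// extract job for module files
{
	...,
	outDir: "lp-extract.gen/lp-includes",
	forLPInclude: true
}
// the compile job then uses the matching settings
{
	...,
	inRootDir: "lp-extract.gen",
	lpIncLookupDirName: "lp-includes"
}
```

With this arrangement, "lp-extract.gen" plays the role of <extracted-input-root> and "lp-includes" the role of <includes-dir> at compile stage.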
Module files typically contain macro and alias definitions rather than actual immediate content.
Names can be aliased. This is convenient for giving shorter aliases to longer names (like M for the next-to-root domain item name), or for quickly moving FDOM fragments to other actual locations without changing the sources. The syntax for an alias definition is: <#LP-ALIAS new-name: old-name#>. After that, new-name becomes an alias of old-name (the old-name can still be used on its own).
Some rules to remember regarding aliases:
  1. The alias is only in effect at compile time and only for that specific input file (this spans to <#LP-INCLUDE ...#>-d fragments, but on a per-include basis, not per module file as a whole). There is no concept of aliasing in FDOM, and in the actual compiled output all names are resolved.
  2. Alias resolution is in effect for starting parts of names, but is done after applying the name resolution rules. That is:
<#LP-ALIAS A/B: C#>
<#LP A/B: this goes to C: A/B #>
<#LP A/B/D: this goes to C/D, because starting part A/B aliases C#>
but:
<#LP A/B/../D: this goes to A/D, because A/B/../D resolves to A/D, and A is not aliased #>

also:
<#LP A: this goes to A
	<#LP ./B: this goes to C, because effective resolved name is A/B, which is aliased#>
#>
The resolved name prior to application of the alias is also called the literal name, and is referred to as such in <#LP-TRACE-WHERE#> output.
  3. An alias can be redefined - the redefinition comes into effect in the input stream order:
<#LP-ALIAS A/B: C#>
<#LP A/B: this goes to C#>
<#LP-ALIAS A/B: D#>
<#LP A/B: this goes to D#>
Be careful, however: the specification of the redefined alias only spans the last name segment, and can be affected by aliasing of the segments that precede it:
<#LP-ALIAS A: B#>
<#LP-ALIAS A/C: D#> A/C and B/C refer to D (here A is literal aliased name, B is unaliased actual name)
<#LP-ALIAS A/C/E: F#> A/C/E and B/C/E refer to D/E
<#LP-ALIAS A: G#>
<#LP-ALIAS A/C: H#> A/C and G/C now refer to H, A/C/E refers to H/E, B/C still refers to D!
The actual items at this point are B, D, D/E, F, G
You can consider an alias definition as creating a symbolic link in the items namespace 'directory', working by similar logic.
  4. If an actual name (directly or via an alias) was used in the current input file in one of the following ways:
  • as FDOM tag or tag target,
  • had any content added under it,
  • used as effective name for macro ( ✖ Macros ) or alias name,
  • was target of a <#ref ...#>,
  • used as intermediate path component for any of the above,
then it can no longer be used for alias in the current input file:
<#LP A/B: some content#>
<#LP-ALIAS A/B: C#> this will be ignored and flag a warning
<#LP-ALIAS A: D#> and this too
the restriction doesn't affect unused names under A/B though:
<#LP-ALIAS A/B/C: C#>
and the A/B itself still can be alias target:
<#LP-ALIAS E: A/B#>
<#LP E: this goes to A/B#>
<#LP-ALIAS E: C#> E can be redefined, since it is an alias from the very beginning
Note that in the example above, after E is defined as an alias, it is no longer possible to use or refer to an item with the actual name E, or any of its sub-items, in this input file - E always refers to the item currently aliased by E. This restriction is introduced to prevent confusing interference between aliases and actual names.
You can define some node names as macros to add a predefined set of tags and content into arbitrary nodes, by adding the macro pseudo-nodes as tags or digressions.
<#LP-MACRO Mac %tag1 %tag2: macro-content#>
Adding the macro as a tag to a node is the same as adding the set of tags and the content in place, at start of the "tagged" node:
<#LP test-node Mac: test content#>
the same as:
<#LP test-node %tag1 %tag2: macro-content test content#>
or:
<#LP test-node: test content <#. Mac#> other test content #>
the same as:
<#LP test-node: test content <#. %tag1 %tag2: macro-content#> other test content #>
or use a macro directly inline, which is the same as using it as a tag (i. e. expanding macro content at start of the containing node):
<#LP test-node: test content <#Mac#>#>
the same as:
<#LP test-node %tag1 %tag2: macro-content test content#>
Multiple macros are expanded in the order of usage (it matters for the content-adding ones).
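For example, with two content-adding macros (all names here are made up):

```
<#LP-MACRO First: [first]#>
<#LP-MACRO Second: [second]#>
<#LP test-node First Second: tail#>
```

the last line expands the same way as <#LP test-node: [first] [second] tail#>, while tagging with Second First would expand the contents in the opposite order.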
Macros can contain arbitrary LP markup tags:
<#LP-MACRO %is-a-class %class: <#LP-TAG-ON classes#> <#LP ./%title: Class title: #> #>
...
<#LP ClassA %class: this is class A, and we added it to <#ref classes#> by using a macro
	<#./%title: A#> <-#it will result in %title = "Class title: A"#>
#>
Technically, a macro can even contain LP-ALIAS or other LP-MACRO definitions, although this generally doesn't make much sense, and you should be careful with these if you decide to use them after all. Keep in mind that the name scope inside the macro (i. e. the place <# . #> refers to, or where name resolution lookup starts) is the node where the macro is being inserted, not the macro itself.
<#LP-MACRO where-am-I: <#LP-TRACE-WHERE#>#>
<#LP A where-am-I: inside A#>
<#LP B where-am-I: inside B#>
Similarly to LP-ALIAS ( ✖ Name aliasing ), you can't define a macro under the same effective full name that has been used in the current input file in one of the following ways:
  • as FDOM tag or tag target,
  • had any content added under it,
  • used as effective name for macro or alias name,
  • was target of a <#ref ...#>,
  • used as intermediate path component for any of the above,
Additionally, it is not correct to use an effective full name that refers to a macro as an initial path for any sub-nodes:
<#LP-MACRO Mac: macro content#>
<#LP Mac/I: may not work as you expect#>
Similarly to aliases, macros are only in effect at compile time of a particular input file - they are all resolved in the actual FDOM output.
The document program for ✖ ${LP_HOME}/lpgwrite-example is a list of instructions that describe which items to select for emitting into the document page, how to arrange the information in them, and how to build the context for this operation. Organizationally, the program specifies one or more document models (definitions of the set of items to include and their presentation details) that are then referred to from renderer specifications ( ✖ renders[] ). In the configuration file, the document program is structured as a JSON/LPSON array of commands, which are described below in more detail.
A document program may look like:
[
	{ nameAlias: "M", name: "domain.logipard" },
	{
		docModel: {
			name: "DocMain",
			rootItems: {
				query: [
					...
				],
				sort: { ... }
			}
		},
		forEachItem: [
			...
		]
	},
	{
		docModel: {
			name: "DocReadme",
			rootItems: {
				query: [
					...
				],
				sort: { ... }
			}
		},
		forEachItem: [
			...
		]
	},
	...
]
Many of the commands involve specification of conditions, queries, collections, and their aliases, in terms of ✖ FDOM querying . In the document program, these are represented as JSON/LPSON objects, in a way compatible with ✖ Suggested compiled FDOM reader interface , as stated below.
Any of the objects listed in ✖ <Condition> .
Will be referred in the following descriptions via {...condition} placeholder.
Any of the objects listed in ✖ <CollectionSpec> , except for ✖ <Item> and ✖ <Collection> options, as these have no counterparts in JSON/LPSON environment.
Will be referred in the following descriptions via {...collection} placeholder.
Any of the objects, or an array thereof, listed in ✖ <QuerySpec> , except for the ✖ <Query> option, as it has no counterpart in the JSON/LPSON environment. An array is interpreted as a composite query, with the components applied in order. The initial collection for the query depends on context and will be explained in place.
Will be referred in the following descriptions via {...query} placeholder.
In a number of contexts that require specification of a collection, the document program also allows specifying a sorting to determine the order in which the collection will be emitted/presented. The sorting spec format is as follows:
{
	byMember: "member-name",
	keyFormat: "lexical|natural",
	order: "asc|desc"
}
Will be referred in the following descriptions via {...sort} placeholder.
Members
Name
Description
✖ byMember
String. Specifies the short name of the member to use as the sorting key. The key consists of the member's content, interpreted as plain text with leading and trailing whitespace trimmed. The member is assumed to contain no nested LP markup; otherwise the actual key value is not guaranteed. Key comparison is case-sensitive.
✖ keyFormat
String, optional (default is lexical). Can be lexical or natural:
  • lexical: the keys are compared as strings (using string lexicographical comparison).
  • natural: the keys are split into sequences of numeric and non-numeric fragments and compared lexicographically segment by segment, where numeric-to-numeric segments are compared as numbers, and non-numeric-to-numeric and non-numeric-to-non-numeric segments are compared as strings. I. e., 1.2.3-a and 1.10-z are compared as [1, ".", 2, ".", 3, "-a"] and [1, ".", 10, "-z"]; the first differing segments are 2 and 10, which is a numeric-to-numeric case, and 2 is the lesser number, so 1.2.3-a is less than 1.10-z. If the key starts with + or - followed by a digit, the +/- counts as part of the number in the first segment, which is considered numeric.
✖ order
String, optional (default is asc). Can be asc (for ascending sorting order) or desc (for descending sorting order).
Members (detailed)
String. Specifies the short name of the member to use as the sorting key. The key consists of the member's content, interpreted as plain text with leading and trailing whitespace trimmed. The member is assumed to contain no nested LP markup; otherwise the actual key value is not guaranteed. Key comparison is case-sensitive.
If the key member is absent, the item is keyless. The keyless items are placed in unspecified order after all of the sorted items.
String, optional (default is lexical). Can be lexical or natural:
  • lexical: the keys are compared as strings (using string lexicographical comparison).
  • natural: the keys are split into sequences of numeric and non-numeric fragments and compared lexicographically segment by segment, where numeric-to-numeric segments are compared as numbers, and non-numeric-to-numeric and non-numeric-to-non-numeric segments are compared as strings. I. e., 1.2.3-a and 1.10-z are compared as [1, ".", 2, ".", 3, "-a"] and [1, ".", 10, "-z"]; the first differing segments are 2 and 10, which is a numeric-to-numeric case, and 2 is the lesser number, so 1.2.3-a is less than 1.10-z. If the key starts with + or - followed by a digit, the +/- counts as part of the number in the first segment, which is considered numeric.
In most cases the most fitting comparison method is natural - it correctly handles such keys as:
  • strings that follow fixed pattern with inclusion of numbers, like item1, item2, ...item10, ...,
  • integer and decimal point containing numbers (with no exponents), like 1, -2, 3.14,
  • version numbers, like 1.0.3
String, optional (default is asc). Can be asc (for ascending sorting order) or desc (for descending sorting order).
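For instance, a sorting spec for version-like keys stored in a member could look like this (the member name version is made up for the example):

```
sort: {
	byMember: "version",
	keyFormat: "natural",
	order: "asc"
}
```

With the natural key format, items with version values 1.2.3 and 1.10 are ordered with 1.2.3 first, since the differing segments 2 and 10 are compared as numbers; the lexical format would order 1.10 first.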
These commands create condition, query, and collection aliases, in terms of ✖ FDOM querying . They can be used at program root level to define shared context for all models (for example, a conventional alias for the project domain name), or inside a document model specification (see below).
Sets a named alias for an item. The instruction syntax is: { nameAlias: "ItemAliasName", name: "name-string"}, where name-string is a full FDOM name (possibly starting with a previously defined alias). The item alias name must be a valid FDOM shortname.
Counterpart of ✖ .nameAlias(aliasName, item) .
Sets a named alias for the query. The instruction syntax is: { queryAlias: "QueryAliasName", query: {...query}}.
Counterpart of ✖ .queryAlias(aliasName, ...querySpecs) .
It is recommended not to have same-named query aliases at program root level and inside the document models, to avoid unexpected behaviour.
Sets a named alias for the condition. The instruction syntax is: { conditionAlias: "?CondAliasName", condition: {...condition}} (the ? prefix is an optional convention).
Counterpart of ✖ .conditionAlias(aliasName, condSpec) .
It is recommended not to have same-named condition aliases at program root level and inside the document models, to avoid unexpected behaviour.
Sets a named alias for the collection. The instruction syntax is: { collectionAlias: "CollAliasName", collection: {...collection}}. It is a permanent alias that will be shared by the subsequent queries, unlike the query-local alias that only spans the rest of the current query ( ✖ Set local collection alias ["alias ..."] ).
Counterpart of ✖ .collectionAlias(aliasName, ...collectionSpecs) .
It is recommended not to have same-named collection aliases at program root level and inside the document models, to avoid unexpected behaviour.
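Taken together, a context definition preamble at program root level might look like this sketch (all names are hypothetical, and the condition/query object forms are assumed to be ones described in FDOM querying; elided parts are marked with ...):

```
[
	{ nameAlias: "M", name: "domain.my-project" },
	{ conditionAlias: "?IsPublic", condition: { named: "..." } },
	{ queryAlias: "PublicMembers", query: { membersThat: "?IsPublic" } },
	{ collectionAlias: "AllDocs", collection: "M/docs" },
	...
]
```

The aliases defined this way are then available to all document models that follow in the program.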
The documentation program can specify one or more document models. The specification instruction is as follows:
{
	docModel: {
		name: "DocumentModelName",
		rootItems: {
			query: {...query},
			sort: {...sort}
		},
		excludeUnder: {...collection}, // optional
		whitelistUnder: {...collection} // optional
	},
	forEachItem: [
		... // list of item readable content specification instructions
	]
}
Members
Name
Description
✖ forEachItem
Specifies the information fragments to include into the readable presentation of each item included into the model. Each instruction can be either one of those listed in ✖ Context definition commands (be sure you don't assign aliases with conflicting names), or one of the instructions listed in this section.
✖ docModel
Specify the document model name and the set of FDOM items to include into this model.
Members (detailed)
Specifies the information fragments to include into the readable presentation of each item included into the model. Each instruction can be either one of those listed in ✖ Context definition commands (be sure you don't assign aliases with conflicting names), or one of the instructions listed in this section.
An instruction that consists of a single JSON/LPSON string constant. It can have a number of meanings depending on the string format:
  • "member-field-name": any FDOM shortname (note that strings starting with %% and # are reserved for other instructions and don't fall under this case). It emits the immediate content of the given member field of the current item, without its (sub-)member items, or nothing if there is no such member.
  • "#text:...arbitrary string...": emit the plain text that follows #text: prefix, in general inline text style.
  • "%%title": emit the current item's human-readable title (the content of its %title member, or the item's short name if no %title is available), in a distinguished header style (or as an interactive title element if applicable to the renderer). In general, you are not required to do this explicitly - a title is emitted automatically, unless the item has a private name (a shortname starting with #).
  • "%%brief": emit the brief part of the item's direct content (its first paragraph, unless it is code block or a list element), in general inline text style.
  • "%%debrief": emit the part of the item's direct content remaining after %%brief, in general inline text style. "%%brief" instruction followed by "%%debrief" instruction emit the full item's direct content.
  • "%%refTitle": emit the current item's title, in general inline text style, wrapped into an on-page link (a Logipard reference, if applicable to the renderer). This instruction makes little sense as is, because links from an item's direct content to the item itself are inherently defunct - it is typically used in conjunction with #item (see below).
  • "%%more-start": this instruction marks the location where the content of the item viewed in brief mode finishes. Everything below it should only be visible in full mode or after switching to it. This instruction can only be used once per forEachItem section.
  • "%%mark-for-toc": this instruction indicates that the current item should be included into the table-of-contents tree (if applicable to the renderer). By default, an item is not marked for TOC, and you should take care to include only items significant enough, otherwise the TOC can become overburdened. It is not necessary to mark every level in the branch - the tree is contracted to the marked items only (i. e., if only an item and its grandparent item are marked, then the item will appear in the TOC as a direct member of its grandparent).
  • "#item:...spec...": any of the above options, except for %%more-start and %%mark-for-toc, prefixed by #item: - e. g. #item:%%title, #item:%%refTitle, #item:fieldName, etc. It is not allowed as a standalone instruction, only inside emitAsItems... instruction (see below), and it refers to the current item of the iterated sub-collection.
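A minimal forEachItem block built from these string instructions could look like this sketch (the member name %note is made up for the example):

```
forEachItem: [
	"%%title",
	"%%brief",
	"%%more-start",
	"%%debrief",
	"#text:Note: ",
	"%note",
	"%%mark-for-toc"
]
```

Here the item's title and brief content are visible in brief view mode, while everything after "%%more-start" only appears in full mode.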
A block of instructions whose output should be placed inside a titled section, which should be emitted in a distinguished sub-header style (less distinctive than one of an item title). The instruction is an object as follows:
{
	section: "Section Title",
	content: [
		...
	]
}
Members
Name
Description
✖ section
String. The plain text section title.
✖ content[]
Array. A block of instructions (same ones as applicable under ✖ forEachItem ) that will emit the section's content.
Members (detailed)
String. The plain text section title.
Array. A block of instructions (same ones as applicable under ✖ forEachItem ) that will emit the section's content.
Perform a query on a given collection and set a permanent alias for the resulting collection (it will replace an earlier defined alias, if any). The instruction is an object as follows:
{
	on: {...collection},
	query: {...query},
	as: "ResultAlias"
}
The notion of a current item is optional for this command, so it can also be used outside docModel.
Members
Name
Description
✖ on
The collection to start with (as in ✖ Collections ). Additionally, if used inside docModel, a "%%self" alias is defined, allowed for the on field or inside the query - it refers to the current item.
✖ query
The query to perform, with on as initial current collection.
✖ as
String. The alias to set for the resulting collection (will replace earlier defined one and will transfer to next instructions, including forEachItem iterations for next items, so keep this in mind to avoid order dependent effects).
Members (detailed)
The collection to start with (as in ✖ Collections ). Additionally, if used inside docModel, a "%%self" alias is defined, allowed for the on field or inside the query - it refers to the current item.
The query to perform, with on as initial current collection.
String. The alias to set for the resulting collection (will replace earlier defined one and will transfer to next instructions, including forEachItem iterations for next items, so keep this in mind to avoid order dependent effects).
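A hypothetical usage inside docModel might look like this (the alias Args is made up, and the membersThat/named query and condition forms are assumed to be ones described in FDOM querying):

```
{
	on: "%%self",
	query: { membersThat: { named: "%arg.*" } },
	as: "Args"
}
```

After this instruction, the Args alias refers to the members of the current item whose names match the pattern, and can be used by the subsequent emitting instructions.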
Perform a block of instructions only if the given collection is not empty. The instruction is an object as follows:
{
	ifNotEmpty: {...collection},
	then: [
		...
	]
}
The notion of a current item is optional for this command, so it can also be used outside docModel.
Members
Name
Description
✖ ifNotEmpty
The collection to check (as in ✖ Collections ).
✖ then[]
Array. A block of instructions (same ones as applicable under ✖ forEachItem ) that will be performed if ifNotEmpty collection is not empty.
Members (detailed)
The collection to check (as in ✖ Collections ).
Array. A block of instructions (same ones as applicable under ✖ forEachItem ) that will be performed if ifNotEmpty collection is not empty.
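For example, combined with a previously set collection alias (Args here is assumed to have been set by an earlier { on: ..., query: ..., as: "Args" } instruction), a section can be emitted only when there is something to put into it:

```
{
	ifNotEmpty: "Args",
	then: [
		{
			section: "Arguments",
			content: [
				...
			]
		}
	]
}
```

This avoids emitting an empty "Arguments" section header for items that have no matching members.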
Perform a block of instructions only if the given condition on the current item is true. The instruction is an object as follows:
{
	ifCondition: {...condition},
	then: [
		...
	]
}
Members
Name
Description
✖ ifCondition
The condition to check (as in ✖ Conditions ).
✖ then[]
Array. A block of instructions (same ones as applicable under ✖ forEachItem ) that will be performed if ifCondition holds.
Members (detailed)
The condition to check (as in ✖ Conditions ).
Array. A block of instructions (same ones as applicable under ✖ forEachItem ) that will be performed if ifCondition holds.
Emit the items in the given collection as a table, with a line per collection item, the columns and the column headers as given. The instruction is an object as follows:
{
	with: {...collection},
	sort: {...sort},
	emitAsItemsTable: [
		[ "column-header-spec", "column-content-spec" ],
		...
	]
}
Members
Name
Description
✖ with
The collection (as in ✖ Collections ).
✖ sort
Optional. The sorting specification (as in ✖ Sorting specification ) to use on the with collection for this table.
✖ emitAsItemsTable[]
Array. Specification of table columns. Each element specifies the column, in left to right order, and is a two-element sub-array:
Members (detailed)
The collection (as in ✖ Collections ).
Optional. The sorting specification (as in ✖ Sorting specification ) to use on the with collection for this table.
Array. Specification of table columns. Each element specifies the column, in left to right order, and is a two-element sub-array:
Members
Name
Description
✖ [0]
String. The column title. Is interpreted as in ✖ String (text, field refs, etc.) .
✖ [1]
String. The column content. Is interpreted as in ✖ String (text, field refs, etc.) , where #item: refers to the element of collection assigned to this line.
Members (detailed)
String. The column title. Is interpreted as in ✖ String (text, field refs, etc.) .
String. The column content. Is interpreted as in ✖ String (text, field refs, etc.) , where #item: refers to the element of collection assigned to this line.
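Putting it together, a two-column table over a previously aliased collection might look like this sketch (the Args alias and the %order member are assumptions for the example):

```
{
	with: "Args",
	sort: { byMember: "%order", keyFormat: "natural", order: "asc" },
	emitAsItemsTable: [
		[ "#text:Name", "#item:%%refTitle" ],
		[ "#text:Description", "#item:%%brief" ]
	]
}
```

Each item of Args produces one table line, with its linked title in the first column and its brief content in the second.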
Emit the items in the given collection as a list, with a line per collection item, made up of concatenated fragments as specified. The instruction is an object as follows:
{
	with: {...collection},
	sort: {...sort},
	emitAsItemsList: [ "fragment-1-spec" [, "fragment-2-spec", ...] ]
}
Members
Name
Description
✖ with
The collection (as in ✖ Collections ).
✖ sort
Optional. The sorting specification (as in ✖ Sorting specification ) to use on the with collection for this list.
✖ emitAsItemsList[]
Array. Specification of fragments to append to form the list line, in the listed order. Each fragment is a string interpreted as ✖ String (text, field refs, etc.) , where #item: refers to the element of collection assigned to this line.
Members (detailed)
The collection (as in ✖ Collections ).
Optional. The sorting specification (as in ✖ Sorting specification ) to use on the with collection for this list.
Array. Specification of fragments to append to form the list line, in the listed order. Each fragment is a string interpreted as ✖ String (text, field refs, etc.) , where #item: refers to the element of collection assigned to this line.
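A sketch of a list emitted from a previously aliased collection (the SeeAlso alias is made up for the example):

```
{
	with: "SeeAlso",
	sort: { byMember: "%title", keyFormat: "lexical", order: "asc" },
	emitAsItemsList: [ "#item:%%refTitle", "#text: - ", "#item:%%brief" ]
}
```

Each item of SeeAlso produces one list line: its linked title, a plain-text separator, then its brief content.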
Emit the items in the given collection as a sequence of nested sub-items (each one independently formatted according to ✖ forEachItem on its own), assuming these are primary locations for the items. The instruction is an object as follows:
{
	with: {...collection},
	sort: {...sort},
	emitAsOwnItems: "basic|full"
}
By lpgwrite-example convention, an item can be emitted at multiple locations in the document, but only one of them is treated as the "home" location. The document format can treat it as, for example, the item's actual information site, and just put links to it into all the other locations (but it can as well ignore this hint).
If there are multiple locations for an item per emitAsOwnItems/emitAsExtItems, lpgwrite-example chooses one of them as the home location; the ones from emitAsOwnItems have higher priority for this choice.
Members
Name
Description
✖ with
The collection (as in ✖ Collections ).
✖ sort
Optional. The sorting specification (as in ✖ Sorting specification ) to use on the with collection for this list.
✖ emitAsOwnItems
String. Specifies the suggested information mode for the items emitted per this instruction. Can be either of...
Members (detailed)
The collection (as in ✖ Collections ).
Optional. The sorting specification (as in ✖ Sorting specification ) to use on the with collection for this list.
String. Specifies the suggested information mode for the items emitted per this instruction. Can be either of...
Members
Name
Description
✖ basic
Only brief part of the item information should be displayed.
✖ full
Full item information should be displayed.
Members (detailed)
Only brief part of the item information should be displayed.
Full item information should be displayed.
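A sketch of emitting a collection as nested sub-items (the SubTopics alias and the %order member are made up for the example):

```
{
	with: "SubTopics",
	sort: { byMember: "%order", keyFormat: "natural", order: "asc" },
	emitAsOwnItems: "full"
}
```

Each item of SubTopics is emitted as a full-information sub-item at this point, formatted according to the model's forEachItem instructions, and these locations are preferred as the items' home locations.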
Emit the items in the given collection as a sequence of nested sub-items (each one independently formatted according to ✖ forEachItem on its own), assuming these are secondary locations for the items. The instruction is an object as follows:
{
	with: {...collection},
	sort: {...sort},
	emitAsExtItems: "basic|full"
}
By lpgwrite-example convention, an item can be emitted at multiple locations in the document, but only one of them is treated as the "home" location. The document format can treat it as, for example, the item's actual information site, and just put links to it into all the other locations (but it can as well ignore this hint).
If there are multiple locations for an item per emitAsOwnItems/emitAsExtItems, lpgwrite-example chooses one of them as the home location; the ones from emitAsOwnItems have higher priority for this choice.
Members
Name
Description
✖ with
The collection (as in ✖ Collections ).
✖ sort
Optional. The sorting specification (as in ✖ Sorting specification ) to use on the with collection for this list.
✖ emitAsExtItems
String. Specifies the suggested information mode for the items emitted per this instruction. Can be either of...
Members (detailed)
The collection (as in ✖ Collections ).
Optional. The sorting specification (as in ✖ Sorting specification ) to use on the with collection for this list.
String. Specifies the suggested information mode for the items emitted per this instruction. Can be either of...
Members
Name
Description
✖ basic
Only brief part of the item information should be displayed.
✖ full
Full item information should be displayed.
Members (detailed)
Only brief part of the item information should be displayed.
Full item information should be displayed.
Print the given collection, with an optional label. Intended for debugging purposes. The instruction is an object as follows:
{
	collDump: {...collection},
	label: "labelSpec"
}
Members
Name
Description
✖ collDump
The collection (as in ✖ Collections ).
✖ label
Optional. String specifying the label. Is interpreted as in ✖ String (text, field refs, etc.) .
Members (detailed)
The collection (as in ✖ Collections ).
Optional. String specifying the label. Is interpreted as in ✖ String (text, field refs, etc.) .
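For example, dumping a previously aliased collection during program debugging (the Args alias is made up for the example):

```
{
	collDump: "Args",
	label: "#text:Args collection at this point"
}
```

This prints the collection's contents under the given plain-text label, which helps verify that a query delivered the items you expected.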
Specify the document model name and the set of FDOM items to include into this model.
Members
Name
Description
✖ name
String. Name of the model, will be used to refer to this model from renderer config (see ✖ docModel ).
✖ rootItems
The initial slice the inclusion set starts from. The root items set is obtained by a query and added to the list of items. The set is then expanded to include all items that are referenced ( ✖ Inline references ) from, or will be emitted as sub-items of, the items already included - all the way down the tree. This set can then be trimmed down (see ✖ excludeUnder , ✖ whitelistUnder ).
✖ excludeUnder
The collection of root items to recursively exclude from the initial set after ✖ rootItems . If excludeUnder collection is specified, then, whenever an item is in FDOM membership tree of one of these items, it is dropped from document and from any collection based lists/tables, and inline links to it are defunct.
✖ whitelistUnder
The collection of root items to whitelist in the initial set after ✖ rootItems . If whitelistUnder collection is specified, then, unless an item is in FDOM membership tree of one of these items, it is dropped from document and from any collection based lists/tables, and inline links to it are defunct.
Note that in the document model, the order in which items of a set are emitted into the resulting document is well-defined, and is specified where appropriate (see below).
Members (detailed)
String. Name of the model, will be used to refer to this model from renderer config (see ✖ docModel ).
The initial slice the inclusion set starts from. The root items set is obtained by a query and added to the list of items. The set is then expanded to include all items that are referenced ( ✖ Inline references ) from, or will be emitted as sub-items of, the items already included - all the way down the tree. This set can then be trimmed down (see ✖ excludeUnder , ✖ whitelistUnder ).
Members
Name
Description
✖ query
The query to deliver the root items. The initial current collection for this query is empty, so in order to make sense, the query should start with a { with: ... } basic query (see ✖ <QuerySpec> ).
✖ sort
The sort specification to determine relative order of the root items in the resulting document. Note that it is top-level order only: any sub-items will be emitted after containing item and before its next sibling item, and the ordering within sub-items is specified by the respective emitting instructions.
Members (detailed)
The query to deliver the root items. The initial current collection for this query is empty, so in order to make sense, the query should start with a { with: ... } basic query (see ✖ <QuerySpec> ).
The sort specification to determine relative order of the root items in the resulting document. Note that it is top-level order only: any sub-items will be emitted after containing item and before its next sibling item, and the ordering within sub-items is specified by the respective emitting instructions.
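A sketch of a rootItems specification (the item name M/docs and the %order member are hypothetical; the { with: ... } form is the basic query the text above refers to):

```
rootItems: {
	query: [ { with: "M/docs" } ],
	sort: { byMember: "%order", keyFormat: "natural", order: "asc" }
}
```

The query delivers the top-level items of the document; every item they reference or emit as a sub-item is then pulled into the model automatically.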
The collection of root items to recursively exclude from the initial set after ✖ rootItems . If excludeUnder collection is specified, then, whenever an item is in FDOM membership tree of one of these items, it is dropped from document and from any collection based lists/tables, and inline links to it are defunct.
This option is useful if you need to exclude certain item trees from the document in a hard way, and it is not practical or reliable to achieve this by adjusting ✖ rootItems .
The excludeUnder is the inverse of ✖ whitelistUnder , and generally they should not be used together. However, if they are, excludeUnder is applied first.
The collection of root items to whitelist in the initial set after ✖ rootItems . If whitelistUnder collection is specified, then, unless an item is in FDOM membership tree of one of these items, it is dropped from document and from any collection based lists/tables, and inline links to it are defunct.
This option is useful if you are generating a document on a limited subscope of the FDOM, and need to guard against leaking information from unnecessary scope because of an occasional reference.
whitelistUnder is the inverse of ✖ excludeUnder , and generally they should not be used together. However, if they are, excludeUnder is applied first.
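As an illustrative sketch (the item names are hypothetical, and the exact query and sort specs should be adapted to your model), a rootItems section restricted to a whitelisted subtree could look like:

```
rootItems: {
	query: [{ with: ... }], // start from an explicit collection, as noted above
	sort: { ... }, // sort spec as appropriate for your document
	whitelistUnder: "my-lib/public-api" // hypothetical: only this item's tree survives
}
```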
A "builtin" program for creating a generic program documentation page; it is used to generate the Logipard documentation itself and is suitable for a quick start. It is intended for use via the LPSON file facility (see ✖ file(...): embedded value from JSON/LPSON file ) with some added parameters, as shown below, and defines a model named DocMain:
	...
	lpgwrite-example: {
		...,
		program: file("${LP_HOME}/lpgwrite-example-docprg.lpson" $, {
			docprgPrologue: [ ... ], // instructions to inject at the start
			docRootItems: {...query},
			LS_EXTENDS: "Extends (is a)",
			LS_MEMBERS: "Members",
			LS_NAME: "Name",
			LS_DESCRIPTION: "Description",
			LS_MEMBERS_FROM_EXTENTS: "Members from extents",
			LS_ARGUMENTS: "Arguments",
			LS_RETURNS: "Returns:",
			LS_ERRORS: "Errors:",
			LS_MEMBERS_DETAILED: "Members (detailed)",
			LS_MEMBERS_FROM_EXTENTS_DETAILED: "Members from extents (detailed)",
			LS_ARGUMENTS_DETAILED: "Arguments (detailed)",
			LS_NOTES: "Notes",
			LS_PROPERTIES: "Properties",
			LS_PROPERTIES_FROM_EXTENTS: "Properties from extents",
			LS_METHODS: "Methods",
			LS_METHODS_FROM_EXTENTS: "Methods from extents"
		}),
		renders: [
			{
				docModel: "DocMain",
				renderer: "${LP_HOME}/lpgwrite-example-render-html" $,
				...
			},
			{
				docModel: "DocMain",
				renderer: "${LP_HOME}/lpgwrite-example-render-md" $,
				...
			},
			...
		]
	}
It also adds several comprehensions on top of the FDOM model, interpreting certain members and tags as domain hints for a generic programming language. More details are described below.
The first paragraph of the item's content is considered its brief description. It is the part visible in the item's brief view mode, along with a list of the item's most essential data (specifically ✖ %extends: extended objects list , ✖ %member: member items , ✖ %arg: argument items , ✖ %return: returned value description , ✖ %errors: errors description ).
The list of objects (i. e. their documentation item names) that the current documented object 'extends' in some subject language sense, such as base classes or data records. The members ( ✖ %member: member items ), methods ( ✖ %method: method items ) and properties ( ✖ %property: property items ) of the extended objects will be added to the object's documentation in dedicated secondary sections. The list is specified by adding the 'extended' items as tags to the %extends or %.extends member (the member itself should not contain any content except for the added tags).
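As a hypothetical sketch of how such a list could be declared (assuming the tag-after-name syntax used in the other LP examples in this documentation; the item names are illustrative):

```
#LP my-lib/Derived {
A class extending a base class.
<#LP ./%extends my-lib/Base#>
#LP }
```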
The list of objects (i. e. their documentation item names) that are 'members' of the current documented object in some subject language sense - for example, structure members. The member objects are member (in the FDOM sense) items with an added %member or %.member tag. It is possible to combine it with %property/%.property or %method/%.method tags.
The list of objects (i. e. their documentation item names) that are 'arguments' of the current documented object in some subject language sense - for example, function or constructor arguments. The argument objects are member (in the FDOM sense) items with an added %arg or %.arg tag.
Description of the 'return value' of the current documented object in the subject language sense - for example, the value returned if the object is a function. The content of the %return member is assumed to be such a description and is appended in a dedicated documentation section. The %return member is assumed to have no title.
Description of the possible 'errors' within the current documented object in the subject language sense - for example, a list of errors an object can throw. The content of the %errors member is assumed to be such a description and is appended in a dedicated documentation section. The %errors member is assumed to have no title.
There is only one member allocated for all 'errors', but you can leverage FDOM's flexibility here: the %errors member can have sub-members, even ones marked with %member.
lpgwrite-example-docprg orders nested items according to the content of their %order member, using natural string ordering.
By default, lpgwrite-example-docprg only includes items in the TOC if they are not tagged with %arg, %member, %method or %property and have "public" shortnames (not starting with % or #). If you have such an item and want to force it into the TOC anyway, add the %for-toc tag to it.
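For illustration, a sketch combining both hints (assuming %order content is a plain string compared in natural order, and using the inline-member syntax shown elsewhere in this documentation; all names are hypothetical):

```
#LP my-topic/%setup-notes %for-toc {
<#LP ./%order: 010#>
This item would normally be left out of the TOC (its shortname starts with %),
but %for-toc forces it in; %order "010" places it before siblings ordered "020".
#LP }
```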
The list of objects (i. e. their documentation item names) that are 'methods' of the current documented object in some subject language sense - for example, language object methods. The method objects are member (in the FDOM sense) items with an added %method or %.method tag. It is possible to combine it with %member/%.member or %property/%.property tags.
Note that for languages that allow method overloading, such as Java, C++ and partially JS, it may be impractical to use either literal short names or full signature-qualified names as FDOM names - most likely you will want to use a mangled name for FDOM and %title to specify the full human-readable name.
The list of objects (i. e. their documentation item names) that are 'properties' of the current documented object in some subject language sense - for example, language object properties. The property objects are member (in the FDOM sense) items with an added %property or %.property tag. It is possible to combine it with %member/%.member or %method/%.method tags.
Any additional information fragments to append to the content before immediate members, possibly with nested sub-items, under a subsection named 'Notes'. The intended usage is by adding anonymous members to item/%note like this:
#LP main-item {
Main item content
#LP ./%note/~ {
#LP note 1 content (in-item), with member
#LP ./note-1-member: note 1 member
#LP }
<#LP ./%note/~: note 2 content (in-item), simple#>
More main item content
#LP }
#LP main-item/%note/~: note 3 content (off-item)
The notes content is appended after the main item's content, but before the item's members, in a flat manner under the 'Notes' subsection.
Notes can be particularly helpful when they are added from locations other than the item's primary content, possibly even from different source files. This way you can add any useful comments related to a particular item on-site, and they will be collected in one place as the item's notes section.
An additional advantage of splitting the notes into several objects under %note is that there are more options for controlling their order. Appending directly to an item's content from multiple sources does not guarantee the resulting order of fragments, and can even disrupt your intended convention of what constitutes the brief part of the item's information. On the other hand, the 'Notes' section location is well-defined, and the members of %note under it obey %order hints ( ✖ %order: ordering control ).
Any note item can contain sub-items, which will be displayed within it as usual (with titles etc.), but use these with caution, as an anonymous member and its sub-members have no stable full name by which they can be referenced. You can define an alias, but it will only be in effect within the same input file.
The %note's own direct content and non-anonymous members are not used. For separation of concerns, there is a separate capability for this with a different intended use case - see ✖ extra .
In the %extra member you can specify content that will be displayed after the item's main content in an inline manner, as if it were written at the end of the item's content itself. More specifically, it behaves like an extra item with no title inserted before the item's detailed members section (and before the 'Notes' section, if available), so it looks like a continuation of the item's own content. The intended usage is to add content and members to item/%extra like this:
#LP main-item {
Main item content
#LP ./%extra: main-item's direct extra content
#LP ./%extra/A %member: main-item's extra member A (in-item)
More main item content
#LP ./ownMember %member: main-item's own member
#LP }
#LP main-item/%extra/B %member {
main-item's extra B content (off-item)
#LP ./extra-B-member: extra item B member
#LP }
This example would result in the following visible structure of the main-item's section:
# main-item
Main item content
Members:
 ownMember | main-item's own member
// end of main-item's brief info
More main item content
 // data from %extra starts here
 main-item's direct extra content
 Members: // of %extra
 A | main-item's extra member A (in-item)
 B | main-item's extra member B content (off-item)
 Members (detailed): // of %extra
 # A
 main-item's extra member A (in-item)
 # B
 main-item's extra member B content (off-item)
  # extra-B-member
  extra item B member
 // data from %extra ends here
Members (detailed): // of main-item
 # ownMember
 main-item's own member
Primary purpose of %extra is display control for members in edge cases.
lpgwrite-example-docprg places the item's direct members tagged with %member, %arg, %return, %errors, and the list of their counterparts from extents, at the end of the item's brief description and before the remaining part of the content. In some cases this can disrupt the information flow (e. g. a fenced code fragment presenting the item's general look, which is better placed before the members list). To work around this inconvenience, you can move the list of members, args etc. from the item itself to members of its ./%extra item. While they will still look "inline", they are logically part of a different item and will not be part of the main item's brief display flow.
The parameters to lpgwrite-example-docprg are provided via added context vars in the LPSON file operator:
Members
Name
Description
✖ docprgPrologue
The array of instructions to inject at the very start of the doc program - typically definitions of aliases to be used in ✖ docRootItems .
✖ docRootItems
The rootItems section (see ✖ rootItems ) of the generated model (DocMain). This object is assigned to rootItems entirely as is, with no wrapping or patching, so the user should not rely on any defaults here.
✖ localization
The group of predefined titles to use in the generated page, moved out to a parameter in order to make them localizable. This object is a dictionary of strings, with member names denoting the meaning of each string:
Members (detailed)
The array of instructions to inject at the very start of the doc program - typically definitions of aliases to be used in ✖ docRootItems .
The rootItems section (see ✖ rootItems ) of the generated model (DocMain). This object is assigned to rootItems entirely as is, with no wrapping or patching, so the user should not rely on any defaults here.
The group of predefined titles to use in the generated page, moved out to a parameter in order to make them localizable. This object is a dictionary of strings, with member names denoting the meaning of each string:
  • LS_EXTENDS: title for section with a list of items from %extends list (i. e. ones tagged on %extends member)
  • LS_MEMBERS: title for section with table of items tagged as %member
  • LS_MEMBERS_FROM_EXTENTS: title for section with a list of %member marked items defined inside the items from the %extends list, all the way down the tree of extended items
  • LS_PROPERTIES: title for section with table of items tagged as %property
  • LS_PROPERTIES_FROM_EXTENTS: title for section with a list of %property marked items defined inside the items from the %extends list, all the way down the tree of extended items
  • LS_METHODS: title for section with table of items tagged as %method
  • LS_METHODS_FROM_EXTENTS: title for section with a list of %method marked items defined inside the items from the %extends list, all the way down the tree of extended items
  • LS_ARGUMENTS: title for section with a table of items tagged as %arg
  • LS_NAME: title for table column with item name (1st)
  • LS_DESCRIPTION: title for table column with item description (2nd)
  • LS_RETURNS: title for section with contents of %return member
  • LS_ERRORS: title for section with contents of %errors member
  • LS_MEMBERS_DETAILED: title for section with full documentations for %member marked items
  • LS_MEMBERS_FROM_EXTENTS_DETAILED: title for section with full documentations for %member marked items defined inside the items from the %extends list, all the way down the tree of extended items
  • LS_ARGUMENTS_DETAILED: title for section with full documentations for %arg marked items
  • LS_NOTES: title for section where all submembers from %notes member will be put under
All of these strings are in fact optional, but it is suggested to provide them all. Default values will have a [D] prefix to mark that they are default placeholders and should be replaced.
The interface that readers of FDOM models generated at the Logipard compilation stage are recommended to implement. In particular, it is implemented by ✖ logipard/lpgread-basic-json.js .
The type that encapsulates an FDOM item node in this reader's engine. May be a "null item" (not to be confused with a null value). If the item is a reader resource that needs to be explicitly disposed, the implementation documentation must emphasize that, stipulate the object lifetime, and provide a disposal method.
Members
Name
Description
✖ .content
Read-only property. The item content (text, inline references, and whatever else the reader's backing model supports).
✖ .name
Read-only property, string. The item's full path name (with no namespace aliases).
✖ .shortName
Read-only property, string. The item's short name (last segment of the full path name).
✖ .tags
Read-only property. Collection of the item's tags.
✖ .members
Read-only property. Collection of the item's members.
✖ .isNull
Read-only property, bool. Check if item is empty (true) or not (false).
✖ .parent
Read-only property, ✖ <Item> . Returns the parent item (the one this item is a member of). For the root item, returns null (not a null item).
✖ .isConditionTrue(lpqCtx, condSpec)
Checks whether the item satisfies a certain condition. The check must be done relative to a query context (in order to resolve condition and collection aliases).
A null item is one referred to by an incorrect path, or an empty one. These are considered equivalent: if a name refers to an item that is not explicitly present in the model, it is assumed to be an empty item, and, conversely, an empty item is treated as a no-item when bundling into a collection (that is, it is not added even if explicitly listed).
Members (detailed)
Read-only property. The item content (text, inline references, and whatever else the reader's backing model supports).
Returns:
Array of items, each of which is either of:
  • string, for a text content
  • an object { ref: <Item>, text: string } (ref is ✖ <Item> ), for an inline ref to another item. text is the ref's alternative display text; if not empty, it is suggested instead of the referenced item's default title.
  • optionally, any other content fragment type specific to this FDOM reader and its backing model
Read-only property, string. The item's full path name (with no namespace aliases).
Read-only property, string. The item's short name (last segment of the full path name).
Read-only property. Collection of the item's tags.
Returns:
✖ <Collection> , collection of the node tags.
Read-only property. Collection of the item's members.
Returns:
✖ <Collection> , collection of the node members.
Read-only property, bool. Check if item is empty (true) or not (false).
Read-only property, ✖ <Item> . Returns the parent item (the one this item is a member of). For the root item, returns null (not a null item).
Checks whether the item satisfies a certain condition. The check must be done relative to a query context (in order to resolve condition and collection aliases).
Arguments
Name
Description
✖ lpqCtx
✖ <QueryContext> , a query context.
✖ condSpec
✖ <Condition> , the condition.
Returns:
true if the item satisfies the condition, false otherwise.
Arguments (detailed)
✖ <QueryContext> , a query context.
✖ <Condition> , the condition.
Methods
.isConditionTrue(lpqCtx, condSpec)
An object containing a pre-compiled query that can be stored in a variable or other value slot. It exposes no useful properties or methods per se; it is intended for use in contexts where the user needs to supply a query. If it is a reader resource that needs to be explicitly disposed, the implementation documentation must emphasize that, stipulate the object lifetime, and provide a disposal method.
A handle for making queries, which also holds auxiliary state (namespace aliases, named collection references, and the ongoing query subject). If it is a reader resource that needs to be explicitly disposed, the implementation documentation must emphasize that, stipulate the object lifetime, and provide a disposal method.
Members
Name
Description
✖ .nameAlias(aliasName, item)
Sets an item alias name (which should be a valid shortname) that can later be used as a standalone item name, or as the starter of another item name, within this ✖ <QueryContext> . Behaviour in case an alias with the given name already exists is implementation specific.
✖ .collectionAlias(aliasName, ...collectionSpecs)
Sets a named collection alias that can be used later to reference the collection within this context ( ✖ <CollectionSpec> ). The collection is built up from the collections corresponding to each element of the specs list. This alias is permanent within the context, unlike a query-local alias ( ✖ Set local collection alias ["alias ..."] ).
✖ .queryAlias(aliasName, ...querySpecs)
Sets a named query alias that can be used later to reference the query within this context ( ✖ <QuerySpec> ). The list is interpreted as a composite query.
✖ .conditionAlias(aliasName, condSpec)
Sets a named condition alias that can be used later to reference the condition within this context ( ✖ <Condition> ).
✖ .item([baseItem ,] name)
Returns an item by the given path name, either full or relative to the provided base item. The full item name's first segment shortname can be a name alias defined in this ✖ <QueryContext> .
✖ .collection(...collectionSpecs)
Returns a collection specified by a list of collection item specs. Each list item is a ✖ <CollectionSpec> .
✖ .with(...collectionSpecs)
Sets the current collection for the subsequent query (call to ✖ .query(...querySpecs) ). The collection is built up from the collections corresponding to each element of the specs list. .with effectively initiates the query chain, but it can also be used in the middle of the chain to override the current collection after a certain step.
✖ .query(...querySpecs)
Performs a query, or a list of queries interpreted as a composite query, on the current collection specified by a preceding ✖ .with(...collectionSpecs) or resulting from previous .query calls. Note that the resulting collection is not returned immediately; it becomes the new current collection instead.
✖ .teardownCollection()
Finalizes the query and returns the result (the current collection at the time of the call). The current collection itself is reset, so the next query must be re-initialized, starting over from ✖ .with(...collectionSpecs) .
✖ .currentCollectionAlias(aliasName)
Sets a named collection alias for the current collection that can be used later to reference that collection within this context ( ✖ <CollectionSpec> ). It is only usable mid-query (when the current collection is meaningful); otherwise it is an error. This is a query-local alias, unlike a permanent one ( ✖ Set local collection alias ["alias ..."] ).
✖ .compileQuery(...querySpecs)
Compiles a query into a handle object usable later to reference the query within this context ( ✖ <QuerySpec> ). The list is interpreted as a composite query.
Members (detailed)
Sets an item alias name (which should be a valid shortname) that can later be used as a standalone item name, or as the starter of another item name, within this ✖ <QueryContext> . Behaviour in case an alias with the given name already exists is implementation specific.
Arguments
Name
Description
✖ aliasName
Alias name, string.
✖ item
The item to alias. String (a full path name, probably including another alias) or ✖ <Item> .
Returns:
Self ( ✖ <QueryContext> ), allowing further calls to be chained
Arguments (detailed)
Alias name, string.
The item to alias. String (a full path name, probably including another alias) or ✖ <Item> .
Sets a named collection alias that can be used later to reference the collection within this context ( ✖ <CollectionSpec> ). The collection is built up from the collections corresponding to each element of the specs list. This alias is permanent within the context, unlike a query-local alias ( ✖ Set local collection alias ["alias ..."] ).
Arguments
Name
Description
✖ collectionSpecs
Each list item is a ✖ <CollectionSpec> .
Returns:
Self ( ✖ <QueryContext> ), allowing further calls to be chained
An implementation-specific method to clear the alias may be provided.
Arguments (detailed)
Each list item is a ✖ <CollectionSpec> .
Sets a named query alias that can be used later to reference the query within this context ( ✖ <QuerySpec> ). The list is interpreted as a composite query.
Arguments
Name
Description
✖ aliasName
String, the alias name
✖ querySpecs
Each list item is a ✖ <QuerySpec> .
Returns:
Self ( ✖ <QueryContext> ), allowing further calls to be chained
An implementation-specific method to clear the alias may be provided.
Arguments (detailed)
String, the alias name
Each list item is a ✖ <QuerySpec> .
Sets a named condition alias that can be used later to reference the condition within this context ( ✖ <Condition> ).
Arguments
Name
Description
✖ aliasName
String, the alias name
✖ condSpec
Condition spec, a single ✖ <Condition> .
Returns:
Self ( ✖ <QueryContext> ), allowing further calls to be chained
An implementation-specific method to clear the alias may be provided.
Arguments (detailed)
String, the alias name
Condition spec, a single ✖ <Condition> .
Returns an item by the given path name, either full or relative to the provided base item. The full item name's first segment shortname can be a name alias defined in this ✖ <QueryContext> .
Arguments
Name
Description
✖ baseItem
Optional. The base item to apply ✖ name path to. ✖ <Item> , string or array of strings.
✖ name
The path to item. String or array of strings. Can begin with a name alias defined in this ✖ <QueryContext> .
Returns:
The target item, as ✖ <Item> . It is always a non-null object: a non-existing item in the model is implicitly created as a null item.
Arguments (detailed)
Optional. The base item to apply ✖ name path to. ✖ <Item> , string or array of strings.
A string is treated as a full path name; an array of strings is treated as a full path given by a list of shortname components. The path, whether given by string or by array of components, can start with a name alias defined in this ✖ <QueryContext> .
If a relative name is given, say some/path, and the baseItem provided has path base/item, then the resulting item is resolved to path base/item/some/path.
The path to item. String or array of strings. Can begin with a name alias defined in this ✖ <QueryContext> .
Returns a collection specified by a list of collection item specs. Each list item is a ✖ <CollectionSpec> .
Arguments
Name
Description
✖ collectionSpecs
Each list item is a ✖ <CollectionSpec> .
Returns:
The collection, as ✖ <Collection>
Each collection item spec is processed and appended to the result individually, regardless of the logic of the other item specs, but in any case no ✖ <Item> will be contained in the result more than once.
Arguments (detailed)
Each list item is a ✖ <CollectionSpec> .
Sets the current collection for the subsequent query (call to ✖ .query(...querySpecs) ). The collection is built up from the collections corresponding to each element of the specs list. .with effectively initiates the query chain, but it can also be used in the middle of the chain to override the current collection after a certain step.
Arguments
Name
Description
✖ collectionSpecs
Each list item is a ✖ <CollectionSpec> .
Returns:
Self ( ✖ <QueryContext> ), allowing further calls to be chained
Arguments (detailed)
Each list item is a ✖ <CollectionSpec> .
Performs a query, or a list of queries interpreted as a composite query, on the current collection specified by a preceding ✖ .with(...collectionSpecs) or resulting from previous .query calls. Note that the resulting collection is not returned immediately; it becomes the new current collection instead.
Arguments
Name
Description
✖ querySpecs
Each list item is a ✖ <QuerySpec> .
Returns:
Self ( ✖ <QueryContext> ), allowing further calls to be chained
Arguments (detailed)
Each list item is a ✖ <QuerySpec> .
Finalizes the query and returns the result (the current collection at the time of the call). The current collection itself is reset, so the next query must be re-initialized, starting over from ✖ .with(...collectionSpecs) .
Returns:
Result, as ✖ <Collection>
Sets a named collection alias for the current collection that can be used later to reference that collection within this context ( ✖ <CollectionSpec> ). It is only usable mid-query (when the current collection is meaningful); otherwise it is an error. This is a query-local alias, unlike a permanent one ( ✖ Set local collection alias ["alias ..."] ).
Returns:
Self ( ✖ <QueryContext> ), allowing further calls to be chained
Errors:
Throws an error if there is no current collection in the context.
An implementation-specific method to clear the alias may be provided.
Compiles a query into a handle object usable later to reference the query within this context ( ✖ <QuerySpec> ). The list is interpreted as a composite query.
Arguments
Name
Description
✖ querySpecs
Each list item is a ✖ <QuerySpec> .
Returns:
The compiled query object ( ✖ <Query> ).
This method can be considered an "anonymous" version of ✖ .queryAlias(aliasName, ...querySpecs) for better code-side convenience and possibly optimization, as the various queries are typically quite diverse and numerous, and it may not be practical to have a named alias for each one.
An implementation-specific method to dispose of the compiled query may be provided.
Arguments (detailed)
Each list item is a ✖ <QuerySpec> .
Methods
.nameAlias(aliasName, item)
.collectionAlias(aliasName, ...collectionSpecs)
.queryAlias(aliasName, ...querySpecs)
.conditionAlias(aliasName, condSpec)
.item([baseItem ,] name)
.collection(...collectionSpecs)
.with(...collectionSpecs)
.query(...querySpecs)
.teardownCollection()
.currentCollectionAlias(aliasName)
.compileQuery(...querySpecs)
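Putting these methods together, a typical query chain might look like the following non-runnable sketch. It assumes a ctx obtained from a concrete reader implementation (such as logipard/lpgread-basic-json.js after loading a model); the item and tag names are purely illustrative:

```
// Sketch only: ctx is a <QueryContext> from a concrete reader, names are illustrative
ctx.nameAlias("lib", "my-project/my-lib");                 // set a name alias
const methods = ctx
	.with(ctx.item("lib/MyClass"))                           // current collection = { MyClass }
	.query({ membersThat: { hasAnyOfTags: ["%method"] } })   // -> its %method-tagged members
	.teardownCollection();                                   // finalize, returns a <Collection>
for (const m of methods) console.log(m.shortName, m.name);
```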
The type that encapsulates a complete and readable collection of ✖ <Item> -s. It should be a JS iterable. A collection must never contain null (with ✖ .isNull = true) or duplicate items. If it is a reader resource that needs to be explicitly disposed, the implementation documentation must emphasize that, stipulate the object lifetime, and provide a disposal method.
Members
Name
Description
✖ .size
Read-only property, number. Size of the collection (how many items are in it).
✖ .contains(item)
Checks whether the collection contains the given item. Must return false for any null (with ✖ .isNull = true) items.
✖ [Symbol.iterator]
The collection must be a JS-iterable object, delivering the contained ✖ <Item> 's in some order (for (var item of collection)). It is recommended that the implementation keep items declared within the same source in the same order as they appear in that source, but the user is advised not to rely on this assumption.
Members (detailed)
Read-only property, number. Size of the collection (how many items are in it).
Checks whether the collection contains the given item. Must return false for any null (with ✖ .isNull = true) items.
Arguments
Name
Description
✖ item
✖ <Item> , item to check for presence in the collection.
Returns:
true if the item is contained in the collection, false otherwise
Arguments (detailed)
✖ <Item> , item to check for presence in the collection.
The collection must be a JS-iterable object, delivering the contained ✖ <Item> 's in some order (for (var item of collection)). It is recommended that the implementation keep items declared within the same source in the same order as they appear in that source, but the user is advised not to rely on this assumption.
Methods
.contains(item)
An element of a condition specification list. Corresponds to ✖ Condition specification concept in FDOM. Can be one of the following objects...
  • string: condition reference by alias
  • boolean: boolean constant type condition
  • { isAnyOf: <CollectionSpec> }: the isAnyOf type condition
  • { hasMembersNamed: <string | RegExp> }: the hasMembersNamed type condition, regexp can be given as JS RegExp (no flags should be used except for i) or as a regexp source string (assuming no regexp flags)
  • { hasMembersThat: <Condition> }: the hasMembersThat type condition
  • { hasAnyOfTags: <CollectionSpec> }: the hasAnyOfTags type condition
  • { hasAllOfTags: <CollectionSpec> }: the hasAllOfTags type condition
  • { hasParentThat: <Condition> }: the hasParentThat type condition
  • { named: <string | RegExp> }: the named type condition, regexp can be given as JS RegExp (no flags should be used except for i) or as a regexp source string (assuming no regexp flags)
  • { and: [ ...<Condition> ] }: the and type condition, the argument is array of <Condition> objects
  • { or: [ ...<Condition> ] }: the or type condition, the argument is array of <Condition> objects
  • { not: <Condition> }: the not type condition
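Since a condition spec is plain data, it can be composed programmatically. A small hypothetical example (the tag names and regexp patterns are illustrative, not taken from any real model):

```javascript
// A <Condition> matching items tagged %member or %method,
// but whose name does not match the "starts with %" pattern
const isPublicMember = {
	and: [
		{ hasAnyOfTags: ["%member", "%method"] },
		{ not: { named: "^%" } }
	]
};

// Conditions nest arbitrarily: "has a parent that has members named exactly 'api'"
const underApiHolder = { hasParentThat: { hasMembersNamed: "^api$" } };
```

Such objects can then be passed to .isConditionTrue, registered via .conditionAlias, or used inline inside query specs.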
An element of a collection specification list. Corresponds to ✖ Collection specification concept in FDOM. Can be one of the following objects...
  • ✖ <Item> : a directly specified single item
  • ✖ <Collection> : a directly specified collection, is unwrapped and appended flat
  • string: name of a single item, or of a collection alias if such named alias is set in the context (the collection alias lookup has preference over an item name)
  • array: of collection item specs - is processed like if it was unwrapped flat into the collectionSpecs list (arbitrary nesting is possible)
  • { union: [nestedCollectionSpecs] }: a set union of collections specified by the array of elements, each of which is also a collection spec item (arbitrary nesting is possible, but note that every item spec at union's list topmost level specifies operands for union operation, not a concatenation)
  • { intersect: [nestedCollectionSpecs] }: a set intersection of collections specified by the array of elements, each of which is also a collection spec item (arbitrary nesting is possible, but note that every item spec at intersect's list topmost level specifies operands for intersect operation, not a concatenation)
  • { subtract: [nestedCollectionSpecs] }: a set difference of collections specified by the array of elements (subtracting 2nd and on elements from 1st element), each of which is also a collection spec item (arbitrary nesting is possible, but note that every item spec at subtract's list topmost level specifies operands for subtract operation, not a concatenation)
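For illustration, here is a collection spec composed as plain data (the item names are hypothetical):

```javascript
// A <CollectionSpec>: (core ∪ extras) minus the deprecated subtree root
const publicRoots = {
	subtract: [
		{ union: ["my-lib/core", "my-lib/extras"] }, // 1st element: what we start from
		"my-lib/deprecated"                          // subtracted from the 1st element
	]
};

// Arrays of specs are unwrapped flat, so nesting arrays is just concatenation:
const flatList = ["my-lib/core", ["my-lib/extras"]];
```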
An element of a basic query specification list. Corresponds to ✖ Basic queries concept in FDOM. Can be one of the following objects...
  • string: reference to an aliased query by the alias name given as string
  • ✖ <Query> : a pre-compiled query object
  • [ ...<QuerySpec> ]: array of query specs, a composite query where the components are applied in the listed order
  • { alias: string }: ✖ Set local collection alias ["alias ..."] , alias name to set is given as string
  • { with: <CollectionSpec> }: ✖ Replace current collection ["with ..."]
  • { membersThat: <Condition>, on?: <CollectionSpec>, recursive?: boolean }: ✖ Select members that satisfy condition ["membersThat ..."]
  • { tagsThat: <Condition>, on?: <CollectionSpec>, recursive?: boolean }: ✖  Select tags of the collection's elements that satisfy condition ["tagsThat ..."]
  • { inMembersThat: <Condition>, query: [ ...<QuerySpec> ], on?: <CollectionSpec>, recursive?: boolean }: ✖  Perform sub-query on members of the collection's elements that satisfy condition ["inMembersThat ..."]
  • { inTagsThat: <Condition>, query: [ ...<QuerySpec> ], on?: <CollectionSpec>, recursive?: boolean }: ✖  Perform sub-query on tags of the collection's elements that satisfy condition ["inTagsThat ..."]
  • { inItemsThat: <Condition>, query: [ ...<QuerySpec> ], on?: <CollectionSpec>, recursive?: boolean }: ✖  Perform sub-query on the collection's elements that satisfy condition ["inItemsThat ..."]
  • { subtractQuery: [ ...<QuerySpec> ], on?: <CollectionSpec> }: ✖  Subtract result of sub-query from current collection ["subtractQuery ..."]
  • { unionQuery: [ ...<QuerySpec> ], on?: <CollectionSpec> }: ✖ Union result of sub-query with current collection ["unionQuery ..."]
  • { intersectQuery: [ ...<QuerySpec> ], on?: <CollectionSpec> }: ✖  Intersect result of sub-query with current collection ["intersectQuery ..."]
  • { sideQuery: [ ...<QuerySpec> ], on?: <CollectionSpec> }: ✖  Perform sub-query with no effect on current collection ["sideQuery ..."]
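As a sketch, a composite query spec can be assembled as a plain JS value. The condition contents and alias name below are hypothetical placeholders, not actual FDOM condition syntax:

```javascript
// A composite query spec: the components apply in the listed order.
// The <Condition> contents are placeholders - the real condition
// format is documented separately.
const querySpec = [
	{ with: { union: [] } }, // replace the current collection (operands omitted here)
	{ membersThat: { /* <Condition> placeholder */ }, recursive: true }, // filter members
	{ alias: "FOUND-MEMBERS" } // remember the result under a local alias
];
```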
It is typical for Logipard's configuration files to grow quite large, so pure JSON becomes inconvenient and poorly maintainable. To address these shortcomings, Logipard uses its own JSON extension, naturally named LPSON. It is designed to solve a number of scalability and maintainability issues:
  • support for C++-style comments (which makes it possible to use LP documentation annotations in the LP config files!)
  • less fragile syntax with more optional visual clues to improve human readability and writability
  • modularity (capability to split an object into several files)
  • possibility of explicit charset specification
  • usage of values based on configurable context variables
  • better parsing error diagnostics (the parser tries to detect as many errors as possible rather than stopping at the first syntax error like traditional JSON parsers do, which is useful for large and modular objects)
  • better debug options (of a sort)
  • backward JSON compatibility
Although used internally by Logipard, the LPSON parser is also exposed for custom use: ✖ Custom usage
On the code side, an LPSON file resolves to a plain JSON-compatible value, so consuming it requires no awareness of any extras compared to plain JSON.
An introduction into LPSON grammar.
An expression is a string of symbols that resolves to a JSON-compatible value. In JSON, values are limited to dictionaries, lists, and atomic constants: numbers, double-quoted strings, true, false, and null. All of these are possible in LPSON as well (with some extended capabilities for lists and strings), plus several more options:
  • vars: resolves to variables dictionary value
  • file(...): resolves to value parsed from the specified JSON/LPSON file
  • field access operator: a value followed by .fieldName, or ."fieldName", or .(field-name-expr); resolves to the value of the given field of the preceding value
  • string interpolation operator: a value followed by $; resolves to the preceding string value with the ${var-name} placeholders replaced by the matching context var values
  • dictionary type implanting: a value followed by a dictionary literal ({ ... }); resolves to that dictionary with an added "@type" key holding the preceding value
  • multiple operators chained (e. g. vars.objectField1."objectField2".("objectField${THREE}" $).stringField { value: 123 }); operators are evaluated left to right with the same priority
An expression can stand in any context where a value is required, and also as a key in a dictionary (provided it resolves to a string).
The full expression is: file(name-value) or file(name-value, extra-vars-dictionary-value). It parses and resolves an LPSON value from the given file, adding/replacing the supplied extra vars in the context vars dictionary. The modified dictionary is only in effect for expressions ( ✖ 'vars': context vars dictionary ) in the child context inside the embedded file; the current file's context vars are not affected.
Arguments
Name
Description
✖ name-value
a value that resolves to a file name. Relative names are relative to the current file's directory (i. e., file("xxx.lpson") from inside yyy/zzz.lpson will refer to file yyy/xxx.lpson).
✖ extra-vars-dictionary-value
a value that resolves to a dictionary. Keys are names of the context vars to add/override in the child context, values are the values to set them to.
Example: { member: file("value-for-member.lpson", { VERSION: "1.0.0" }) }
Files with the same effective set of context variables are cached during parsing, so don't worry about performance when using file("same-file") multiple times.
Arguments (detailed)
a value that resolves to a file name. Relative names are relative to the current file's directory (i. e., file("xxx.lpson") from inside yyy/zzz.lpson will refer to file yyy/xxx.lpson).
a value that resolves to a dictionary. Keys are names of the context vars to add/override in the child context, values are the values to set them to.
A string literal. Can be double-quoted ("abc", JSON-compatible), single-quoted ('abc'), or backtick-fenced (`...`abc`...`).
In quoted literals, the same backslash escapes as in JSON are allowed (\", \\, \/, \n, \r, \t, \uXXXX, etc.). The non-matching quote type (' in "..." and " in '...') can stay unescaped. Line breaks (raw or escaped) are not allowed in quoted strings.
In fenced literals, the closing delimiter is exactly the same number of adjacent backticks as the opening one. All the characters within are taken verbatim, including line breaks (no escapes or trims), with a couple of convenience exceptions:
  • the opening backticks can be the last non-whitespace characters on their line - in that case, the trailing whitespace and the line break are trimmed. These backticks don't have to start the whole line.
  • the closing backticks can be the first non-whitespace characters on their line - in that case, the line break and the leading whitespace are trimmed. These backticks don't have to end the line.
That is:
before, `a`, after
// is the same as
before, ``
a
        ``, after
A number literal. Can be a decimal number with an optional minus sign and an optional exponent, like in JSON, e. g. 1, 2, -1.0e10... In LPSON, a plus sign is also allowed (+1 etc.), and, additionally, integer hex numbers (case-insensitive) are allowed, e. g. 0x1F, 0X100, -0x123
Boolean true constant (token is case-sensitive). Same as in JSON.
Boolean false constant (token is case-sensitive). Same as in JSON.
Null value constant (token is case-sensitive). Same as in JSON.
A value that resolves to the dictionary of context variables, with variable names as keys and the (JSON-compatible) values assigned to the corresponding variables.
At the initial file root level, the dictionary contains the default set of variables, specifically for LP config these are:
  • LP_HOME: the installation directory of the currently running Logipard pipeline executor; can be used to reference the built-ins
  • LP_PROJECT_ROOT: the project root directory; use it to construct strings that are meant to be file names relative to the project root (this does not apply to file names in the file(...) operator, where it is done automatically)
  • THISFILE: path to the current LPSON file; may be useful to refer to items relative to the file location (e. g. "${vars.THISFILE}/../item_in_the_same_dir.png" $)
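For instance, a config fragment could combine these variables with the string interpolation operator (the file and field names here are made up for illustration):

```lpson
{
	// hypothetical reference to something under the Logipard installation
	writer: "${LP_HOME}/lpgwrite-example" $,
	// hypothetical file next to the current .lpson file
	logo: "${THISFILE}/../logo.png" $
}
```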
(expression), or (expression1, expression2, ...) - a subexpression calculated at higher priority than the rest of the operator chain. It resolves to the value of the parenthesized expression, or of the last expression in the parenthesized list.
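A minimal illustration of both forms:

```lpson
("single") // resolves to "single"
(1, 2, "three") // resolves to "three" - the last expression in the list
```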
Counterpart of JSON dictionary (hash): { "key1": value1, "key2": value2, ... }, but has some additional features:
Simplified keys
Double quotes around key names can be omitted if the keys are valid identifiers. In LPSON, the characters allowed in valid identifiers are not just A-Z, a-z, 0-9 (except as the first character) and _ like in JS, but also +, -, *, / (except for // and /*, which are treated as comment starts), $, and =. Example:
{
 "jsonStyleKey": 0,
 LP-style-key: 1,
 /this-is+allowed=too*$: 2,
 jsonStyleKey: 3 // it is the same as "jsonStyleKey"
}
In case of duplicate keys, the later ones silently replace the earlier ones.
Expression spread
Adds the keys and values from the value given by the provided expression, provided that it resolves to a dictionary. Similar to the JS spread operator inside an object. Example:
{
 a: 1,
 ...({ b: 2, c: 3 }),
 d: 4
}
Duplicate key behaviour is the same as if that object's contents were embedded inline at this place.
Expression keys
Keys can be expressions, provided that they resolve to a string. Example:
{
 (vars.KEY_NAME): "value"
}
@type prefixing
If a dictionary literal is prefixed with an expression, it is the same as having that expression added to the dictionary under the "@type" key. Example:
"string" { value: "123" }
is the same as:
{ "@type": "string", value: "123" }
The prefix expression is not limited to atomic values:
{ class: "even another dictionary" } { value: "typed value" }
is the same as:
{ "@type": { class: "even another dictionary" }, value: "typed value" }
or:
(expression) { value: "typed value" }
is the same as:
{ "@type": (expression), value: "typed value" }
Type prefixing only works on dictionary literals on the right hand side. It is not allowed if the right hand side is an expression (even a parenthesized dictionary literal). In fact, it is considered a postfix operator.
Comma after final entry
Similarly to JS and most other C-like syntaxes, LPSON allows a comma after the last entry of a dictionary:
{
 is: "legit",
 legit: "too",
}
Counterpart of JSON list: [ value1, value2, ... ], but has some additional features:
Expression spread
Inserts the values from the value given by the provided expression, provided that it resolves to a list. Similar to the JS spread operator inside an array. Example:
[
 1,
 ...([2, 3]),
 4
]
Comma after final entry
Similarly to JS and most other C-like syntaxes, LPSON allows a comma after the last entry of a list:
[
 "is",
 "legit",
]
Given that the left hand side value is a dictionary, the operator resolves to the value of its field (member) with the given key. Absence of such a field is considered an error, but it is allowed to supply a default value for this case.
The key can be a double-quoted string, or an unquoted valid identifier, or a parenthesized expression that should resolve to string:
{ a: "value" }.a
{ a: "value" }."a"
{ a: "value" }.("a")
// all these expressions resolve to "value"
Default value can be provided by adding a (= expression) suffix after the key or key expression:
{ a: "value" }.a (= "default value") // resolved to "value"
{ b: "value" }.a (= "default value") // resolved to "default value", as .a is not defined
A key is considered undefined only when it is explicitly absent. If a key is set to null or any falsy value, the field is defined and has that value.
Given that the left hand side value is a string, the operator interprets and resolves it as a template string that can contain the following fragments:
  • ${varName} - substitutes the value of context variable varName, provided it resolves to a string, number, or boolean
  • \$ (\\$ in double-quoted string) - literal $
Example:
"Program version ${version}, and this is not a \\${placeholder}" $
// if vars.version is "1.0.0", then it resolves to same as "Program version 1.0.0, and this is not a ${placeholder}"
If the variable is not defined, or is not one of the three allowed types, it is an error. It is possible, however, to supply a set of defaults for this particular evaluation via an additional (= dictionary-expression) suffix:
// vars.a is "value-of-a", vars.b is not defined
"a is ${a}, b is ${b}" $(= { a: 1, b: 2, c: 3 })
// same as "a is value-of-a, b is 2"
Each value or spread expression can be prefixed with one or more annotations that alter its evaluation context or add some extra behaviour. The annotation format is <valid-LP-identifier expression-parameter>.
Given that the expression parameter resolves to a dictionary value, adds context variables (or overrides existing ones) with names matching the dictionary keys, set to the respective values. The replacement is done in a child context, which is only in effect for evaluation of the annotated value or spread expression.
Note that the variables are evaluated once before processing the main value and contain the resolved JSON-compatible values, not expressions to re-evaluate on each use.
Example:
<+vars { a: 10 }>
{
 innerA: "${a}",
 innerA1: <+vars { a: 11 }> "${a}",
 <+vars { a: 12 }>
 ...{ innerA3: vars.a, innerA4: "${a}" }
}
With several +vars annotations in a row, they are all applied in order, with later ones overriding earlier ones:
<+vars { a: 1 }>
<+vars { a: 2, b: 3 }>
"${a} ${b}" $ // "2 3"
Dumps to stdout the value that the given expression parameter resolves to, using the context variables in effect at the time the trace annotation is encountered. Useful for checking or debugging values at a questionable location.
Example:
<trace vars.a>
{
 innerA: "${a}"
}
If placed in a row with +vars annotations, the variables used are the ones in effect after the last +vars preceding the trace:
<+vars { a: 10 }>
<trace vars.a> // 10
<+vars { a: 20 }>
vars.a // 20
Tokenization is done in two passes. The first pass is performed on the file decoded in the latin1 charset, in order to detect any CHARSET_COMMENT. The second pass, which yields the actual token string, is done on the file decoded with the charset determined from the first pass - the one found in a CHARSET_COMMENT, or UTF-8 in absence thereof.
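For example, a file saved in a non-UTF-8 encoding can announce its charset in a comment that the first pass picks up (assuming the named charset is supported by the parser's decoder):

```lpson
//#charset windows-1251
{
	note: "non-ASCII string content below is decoded as windows-1251"
}
```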
Token-level grammar is given in regexp notation:
WHITESPACE ::= \s+
CHARSET_COMMENT ::= //[^\S\r\n]*#charset[^\S\r\n]+([-A-Za-z0-9_]*).*
COMMENT ::= //.*
MULTILINE_COMMENT ::= /\*[\S\s]*?\*/
// whitespaces and comments are dropped from the token string
// '#charset' token is case sensitive, but the charset name itself isn't, and hyphens are ignored - that is,
// you can use #charset utf8, #charset UTF-8, or #charset Utf8-, etc.

SINGLE_QUOTE_STRING ::= '(?:\\.|[^'])*'
DOUBLE_QUOTE_STRING ::= "(?:\\.|[^"])*"
// similarly to plain JSON, quoted strings can not span multiple lines

FENCED_STRING ::= (?<fence>`+)(?:\s*?\n)?([\S\s]*?)(?:(?<=\n)[^\S\r\n]*)?\k<fence>
// a fenced string is delimited by runs of backticks of the same length (as short as a single backtick), and
// can span multiple lines
// all characters between the fences are taken verbatim, including spaces and newlines, except for trailing
// whitespaces on line with opening fence and/or leading whitespaces on line with closing fence, if the fences
// are, resp., last/first non-whitespace chars on their lines - such whitespace runs, including line feeds, are dropped.

NUMBER ::= [-+]?(?:\d+(?:\.\d+)?(?:[eE]\d+)?|0[Xx][0-9A-Fa-f]+)
// unlike in plain JSON, LPSON allows leading + and hexadecimal integer numbers

PUNCTUATION ::= \.\.\.|[\(\)\[\]\{\}<>.,:]
// the recognized LPSON punctuators are: '[' ']' '{' '}' '(' ')' '<' '>' ',' ':' '.' '...'

IDENTIFIER ::= (?![0-9])(?:[-+A-Za-z_$*=]|\/(?!\/))+
// in addition to digits, letters and underscore, LPSON allows identifiers to contain $, -, +, *, / (except for
// two consecutive /'s, which are treated as a comment start), and =
// the identifier must not start with a digit, or with a plus or minus followed by a digit
The "standard" grammar-based parser in LPSON only covers the basic structure, so it is called "level 1". Finer details ( ✖ "level 2" ) are addressed in a grammarless manner.
The L1 grammar is as follows:
ANNOTATION ::= '<' NOISE '>'
SPREAD ::= '...' NOISE
SUBEXPR ::= '(' (NOISE ',')* NOISE? ')'
LIST.ITEM ::= ANNOTATION* (SPREAD | NOISE)
LIST ::= '[' (LIST.ITEM ',')* LIST.ITEM? ']'
KEY_VALUE ::= NOISE ':' NOISE
DICTIONARY.ITEM ::= ANNOTATION* (SPREAD | KEY_VALUE)
DICTIONARY ::= '{' (DICTIONARY.ITEM ',')* DICTIONARY.ITEM? '}'
ATOMIC_VALUE ::= NUMBER | STRING | IDENTIFIER
NOISE.ITEM ::= SUBEXPR | LIST | DICTIONARY | ATOMIC_VALUE | '.'
NOISE ::= NOISE.ITEM+
// NOISE is basically some expression that resolves to a JSON value, but its further structure is out of scope at L1

LPSON_FILE ::= ANNOTATION* NOISE
// the LPSON file contains exactly one, optionally annotated, NOISE symbol
The LPSON file parser is available for your own use:
const { loadLpsonFile } = require('logipard/lpson');
...
var [parsedObject, errors] = await loadLpsonFile('path-to-file.lpson', { varA: "A" }); // the 2nd parameter is dictionary of vars
if (errors.length > 0) {
	console.log("There were parse errors:");
	console.dir(errors);
} else {
	console.log("Object parsed successfully, backward JSON serialization:", JSON.stringify(parsedObject));
}
Note that the variable THISFILE is always overridden by the parser.
The page generated by Logipard 1.0.0 using lpgwrite-example + lpgwrite-example-render-html generator