
Mēhua Design Document

(Spelled “Meehua” when macrons are not available [1])

Generic Architecture

[Figure: Generic reporting architecture for Mēhua (architecture.png)]

The key difference between Meehua and OML lies in the Processing Points (PP). While OML filtering can only happen at the source, with the results sent to a terminal Collection Point (CP, the oml2-server), Processing Points allow tapping into existing Measurement Streams (MS) and generating new streams. Another key difference is the ability to provide control feedback, either to the reporting chain or to the system itself, based on the output of some processing.

An MS is a sequence of tuples following a given schema. Several MSs, coming from the same or different Injection Points (IP), can follow the same schema (e.g., a packet capture tool running on different nodes, or two TCP streams from different sources to a single Iperf instance).

A snapshot in time of an MS can be seen as an SQL table, and we can envision running queries on it.

Measurement Points, Schemata and Measurement Streams

Each MP outputs samples into its associated MS following a given, static schema. For each new stream, the sender generates a unique identifier.

A schema is defined as a named tuple of elements. An element has a name, a type, and an optional unit. Both schemata and elements also have a storage-dependent name, which can be used as, e.g., valid table and field names for database backends, while retaining the ability to map to and from human-readable names.

Schemas are instantiated as tables with at least a stream identifier and a timestamp, alongside their primary key. Each row in such a table is a sample tuple from a stream corresponding to that schema.
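
As an illustration, the following C sketch shows one possible in-memory representation of schemata and elements matching the description above; all type and variable names (MeehuaType, MeehuaElement, MeehuaSchema, gps_schema) are assumptions, not part of any existing API.

#include <stddef.h>

/* Hypothetical descriptors for schemata and their elements; names are illustrative only. */
typedef enum { MH_INT32, MH_DOUBLE, MH_STRING } MeehuaType;

typedef struct {
  const char *name;         /* human-readable element name, e.g. "Longitude" */
  MeehuaType  type;         /* element type */
  const char *unit;         /* optional unit, NULL if none */
  const char *column_name;  /* storage-dependent name, usable as an SQL column name */
} MeehuaElement;

typedef struct {
  const char          *name;        /* human-readable schema name, e.g. "Example GPS" */
  const char          *table_name;  /* storage-dependent name, usable as an SQL table name */
  size_t               n_elements;
  const MeehuaElement *elements;    /* tuple elements, in order */
} MeehuaSchema;

/* Example instance: the GPS schema used later in this document. */
static const MeehuaElement gps_elements[] = {
  { "Longitude", MH_DOUBLE, NULL, "Longitude" },
  { "Latitude",  MH_DOUBLE, NULL, "Latitude"  },
  { "Elevation", MH_DOUBLE, NULL, "Elevation" },
};
static const MeehuaSchema gps_schema = {
  "Example GPS", "ExampleGPS",
  sizeof(gps_elements) / sizeof(gps_elements[0]), gps_elements,
};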

OM question: Does the schema definition in the Schema table include ID, StreamID and TS as explicit elements?

Schemas are defined when receiving new streams. Schema and element identifiers are local to the receiver (but stream IDs aren't). The mapping between StreamID and table is done when the schema is defined.

OM question: How do we do this mapping? By human-readable names? (Probably.) Or by actual schema, regardless of the name? (This might confuse people; e.g., in/out bytes would have the same schema, but not the same semantics.)

Metadata

Schema

By default, one schema is known: that of Metadata streams. Each metadata sample refers to a specific stream (maybe even the metadata stream itself!), and optionally to a specific element of that stream, and provides a Key-Type-Value piece of information. Metadata can include: domain, node-id, application, command line parameters, etc.
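
As a minimal sketch, one metadata tuple could be represented in C along these lines; the struct and field names are assumptions rather than an existing type, and mirror the Metadata table shown in the Example section below.

#include <stdint.h>

/* Hypothetical metadata tuple; fields mirror the Metadata schema described above. */
typedef struct {
  uint64_t    about_stream_id;  /* stream this metadata refers to */
  int         element_id;       /* element of that stream, or -1 for the whole stream */
  const char *key;              /* attribute described, e.g. "node-id" */
  const char *type;             /* type of the value, e.g. "string" */
  const char *value;            /* value, serialised as a string */
} MeehuaMetadataTuple;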

OM comment: Should the Metadata schema be declared like all other schemata, only when the storage is initialised? On the one hand, it would support discovery as for the other schemata; on the other hand, it also implies information duplication (but also easy migration and no assumption as to what is available). I think it makes sense to declare it like any other schema.

OM question: What is the "span" of metadata? It would make sense to have it last until it is replaced by a later (based on timestamp) sample with the same StreamID/ElementID. This might be tricky to capture with SQL statements (SELECT s.field, m.precision FROM stream s, metadata m WHERE s.ts <= m.ts ...?). Or could this be left to the application description?

OM comment: If metadata is a separate stream, then we have both its streamID, and that of the stream it refers to.

Relationship to other streams, and propagation

One metadata stream might cover several data streams (e.g. one application with multiple MPs).

Metadata are separate streams; they may or may not be propagated alongside the stream(s) they refer to. It might, however, be a good idea to do so when setting up the reporting pipe. In any case, forwarding of the relevant metadata subset is at the discretion of the PPs along the way.

API

The basic API (which OML will also implement) should provide a single function:

int 
omlc_inject_metadata(
  const OmlMP *mp,         /* Measurement point to which the metadata is related (can be NULL) */
  const char *key,         /* Attribute described */
  const OmlValueU *value,  /* Value of that attribute */
  OmlValueT type,          /* Type of that value */
  const char *fname);      /* Optional field to which that metadata relates */

The mp argument would be used to set the right StreamID for the tuple created in the metadata stream, while fname would be passed to a new internal function determining the field index in the schema (int fname2idx(const OmlMP *mp, const char *fname)); fname can be NULL (i.e., metadata referring to the whole MP).
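
A hedged usage sketch of the function above, assuming an MP handle obtained beforehand via omlc_add_mp() and the OML 2.x value-handling macros (omlc_zero(), omlc_set_double(), omlc_set_string(), omlc_reset_string()); the "precision" and "node-id" keys and their values are purely illustrative.

#include <oml2/omlc.h>

/* Sketch: attach metadata to a specific field, then to the whole MP.
 * mp is assumed to come from omlc_add_mp(); keys and values are illustrative. */
void annotate_gps_mp(OmlMP *mp)
{
  OmlValueU v;
  omlc_zero(v);

  omlc_set_double(v, 2.5);  /* e.g., positioning precision */
  omlc_inject_metadata(mp, "precision", &v, OML_DOUBLE_VALUE, "Longitude");

  omlc_set_string(v, "node42");
  omlc_inject_metadata(mp, "node-id", &v, OML_STRING_VALUE, NULL);  /* whole MP */
  omlc_reset_string(v);  /* release the copied string */
}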

Example

Taking the example of a GPS location schema, and two streams providing data following this schema, we would end up with the following layout in the storage backend.

Schema table:

ID     Name            TableName
mdID   Metadata Table  Metadata
gpsID  Example GPS     ExampleGPS

Element table:

ID     SchemaID  Name           Type    Unit  ColumnName
elID1  mdID      AboutStreamID  int     -     AboutStreamID
elID2  mdID      ElementID      int     -     ElementID
elID3  mdID      Key            string  -     Key
elID4  mdID      Type           type    -     Type
elID5  mdID      Value          string  -     Value
elID6  gpsID     Longitude      double  -     Longitude
elID7  gpsID     Latitude       double  -     Latitude
elID8  gpsID     Elevation      double  -     Elevation

Stream:

ID    SchemaID
sID1  gpsID
sID2  gpsID
sID3  mdID
sID4  mdID

ExampleGPS:

ID  StreamID  TS  Longitude  Latitude  Elevation
x1  sID1      t1  long11     lat11     el11
x2  sID2      t2  long21     lat21     el21
x3  sID1      t3  long12     lat12     el12
x4  sID2      t4  long22     lat22     el22
x5  sID1      t5  long13     lat11     el11

And some Metadata:

ID  StreamID  AboutStreamID  TS  ElementID  Key     Type    Value
y1  sID3      sID1           t1  -          sender  string  app@…:32489
y2  sID4      sID2           t2  -          sender  string  app@…:32190
y3  sID4      sID2           t6  -          fix     int     0
y4  sID3      sID1           t7  elID6      noise   double  no11

Processing Point

The core of a processing point is made of three successive functions: an s2t function converting MSs to tables, which periodically triggers the filtering function proper, f, running the query on the table(s), and a final t2s function re-serialising the output as a new MS.
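
The following C sketch illustrates this three-stage structure; the MeehuaStream and MeehuaTable types and all function names are hypothetical, and the actual triggering policy (sample count or period) is left out.

#include <stddef.h>

/* Hypothetical sketch of a PP as three chained callbacks; all names are
 * illustrative, not an existing API. */
typedef struct MeehuaStream MeehuaStream;  /* an input or output MS */
typedef struct MeehuaTable  MeehuaTable;   /* tabular view of one or more MSs */

typedef struct {
  /* s2t: convert tuples from the input MSs into table rows */
  void (*s2t)(MeehuaStream *const *inputs, size_t n_inputs, MeehuaTable *table);
  /* f: run the query/filter proper on the table(s) */
  MeehuaTable *(*f)(const MeehuaTable *table);
  /* t2s: re-serialise the result rows as tuples of the output MS */
  void (*t2s)(const MeehuaTable *result, MeehuaStream *output);
} MeehuaProcessingPoint;

/* Driver, to be triggered every so many samples or every so many seconds. */
void meehua_pp_run(const MeehuaProcessingPoint *pp,
                   MeehuaStream *const *inputs, size_t n_inputs,
                   MeehuaTable *scratch, MeehuaStream *output)
{
  pp->s2t(inputs, n_inputs, scratch);    /* stream(s) -> table */
  MeehuaTable *result = pp->f(scratch);  /* filter/query */
  pp->t2s(result, output);               /* table -> stream */
}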

OM question: Do we allow filtering across MSs from different domains? Can a PP create a stream for a different domain? (Probably.)

OM question: Can different applications generate the same schema? (Probably)

MSs (or subsets of their columns) are aggregated into tables. Every so often (after a given number of elements, or at the end of a time window), a filter is run on these joined tables, creating new tuples which are then converted back into an output MS.

A unique identifier is generated for each stream, which can be used to select specifically data from a given stream within a pool of several matching the same schema.

OM comment: We could just use their sender-id/domain, but this is not specific enough to uniquely identify them. Perhaps the lib should give a UUID to each stream when it is created; this would however create the problem of not knowing stream IDs in advance, or after a restart of the sender.
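
If per-stream UUIDs were adopted, generating them could be as simple as the sketch below, which uses libuuid; this is only one possible mechanism, and the helper name is made up.

#include <uuid/uuid.h>  /* libuuid; link with -luuid */

/* Hypothetical helper: produce a printable unique identifier for a new stream. */
void meehua_new_stream_id(char out[37])
{
  uuid_t uuid;
  uuid_generate(uuid);      /* random (or time-based) UUID */
  uuid_unparse(uuid, out);  /* 36 characters plus terminating NUL */
}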

OM comment: It seems better to me to group all tuples from MSs with the same schema into the same table, and add columns identifying their source, to support GROUP BY constructs if needed, or simple aggregation otherwise.

Source streams and their metadata are listed as metadata of the created stream, for provenance management.

OM comment: I'm not sure it makes sense to only expose the columns manipulated by a given filter, as this would either require storing the filtered data subsets separately in ad hoc tables, or doing the column-filtering just before providing the data to the filter, which would incur an additional SELECT-like construct that the filter could very well do on its own.

OM comment: We might need a specific OML_TIMESTAMP type which, similarly to the OML_KEY_XXX types, would allow carrying semantics about the use of the field and allow automatic filtering, particularly when we have period-based filtering, as the time when, and pace at which, the PP receives measurements is not guaranteed to be correlated with those at which they were created (e.g., with a proxy on the way). PPs should probably add a timestamp by default. Also, I don't think we should rely on protocol-level timestamps (oml_ts_*).

Control Language

  1. Create a new stream (à la StreamSQL [2]), with parameters (as metadata)
  2. Send parameter values as a specific schema to control the PP (as a data stream)

The example below is triggered either when the number of input samples has reached a threshold or after every given time period (in seconds), and sends data streams both to a configurable next hop and to a backup CP (which could be another PP).

CREATE Sx (ws:int, period:double:s, collect:string) \
SELECT A, hist(B), avg(C) from SchemaX \
GROUP BY Sid \
WINDOW $ws OR \
PERIOD $period \
COLLECT $collect AND \
COLLECT tcp:BACKUP:3003

With no additional filtering specification, a first() filter is applied.

The (ws:int, period:double:s, collect:string) part declares parameters configurable at run time by, e.g., sending a data stream with one tuple to the processing point, following schema Sx_params.
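
As a hedged illustration, using the OML-compatible client API, updating these parameters could amount to injecting a single tuple into an MP bearing the Sx_params schema; the MP definition below and its wiring to the PP are assumptions.

#include <oml2/omlc.h>

/* Hypothetical parameter-update stream matching the Sx_params schema. */
static OmlMPDef sx_params_def[] = {
  { "ws",      OML_INT32_VALUE  },
  { "period",  OML_DOUBLE_VALUE },
  { "collect", OML_STRING_VALUE },
  { NULL,      (OmlValueT)0     },
};
static OmlMP *sx_params_mp;  /* registered once, before omlc_start() */

void declare_sx_params(void)
{
  sx_params_mp = omlc_add_mp("Sx_params", sx_params_def);
}

void update_sx_params(int ws, double period, const char *collect)
{
  OmlValueU v[3];
  omlc_zero_array(v, 3);
  omlc_set_int32(v[0], ws);
  omlc_set_double(v[1], period);
  omlc_set_string(v[2], collect);
  omlc_inject(sx_params_mp, v);  /* one tuple to the Sx_params stream */
  omlc_reset_string(v[2]);       /* release the copied string */
}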

Reporting Chain Instantiation

How the chain of PPs is instantiated and controlled is deliberately left out of the scope of Meehua, as this depends on the use cases. It can, for example, be left to a Resource Proxy in OMF.

However, some corner cases are not clear-cut as to whether they belong to the PP (and its control language) or to the control framework. For example, for sample-based measurement, it might be necessary to count only those samples which match a specific criterion (WHERE clause) before triggering a query based on this criterion, in order to get the desired number of matching samples (1). Another example is a PP only interested in a limited subset of an MS (e.g., to save capacity), in which case filtering should be done at the upstream PP generating the stream, rather than at the downstream one (2). Some syntactic sugar for these purposes could be introduced, such as extending the FROM clause to specify filtering criteria and/or subsets of an MS deemed relevant.

OM/MO questions: The issue is that such upstream communication is not currently envisioned and would probably require a new, different protocol, as well as create scalability issues. How do we do this? Should the control framework extend the filtering language to properly instantiate upstream PPs (e.g., ... FROM stream(b<1))?

This is also a concern for authorisation and authentication. The former should probably be part of the PP, in support of the latter by the control framework.

API

The Meehua API should be conceptually compatible with OML's (i.e., we should be able to write an oml-comp library in a few lines of code, to support easy migration).

However, it should be reentrant and thread-safe. In particular, it should manipulate an initial context, to which connections and MP definitions, as well as buffers, would be attached. Also, the parametrisation of the library should be more modular than omlc_init(), to avoid having to create fake command line arguments, though helper functions to parse those should still be available.

meehua_context ctx = meehua_init(app_name);
meehua_config_argv(ctx, &argc, &argv); //also does meehua_config_env, and internally calls meehua_set_[nodeid,domain,...]
meehua_start(ctx); // probably cannot set nodeid and others after that, but can declare new MPs
...
meehua_terminate(ctx); // does the same as the two following commands
//meehua_close(ctx);
//meehua_free(ctx);
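
Continuing this sketch, MP declaration and injection would also attach to the context rather than to global library state; all meehua_* names and signatures, and the schema-string syntax below, are hypothetical.

/* Hypothetical continuation of the sketch above; everything here is illustrative. */
meehua_context ctx = meehua_init("gps-logger");
meehua_config_argv(ctx, &argc, &argv);

/* MPs are declared against the context; the schema-string syntax is made up. */
meehua_mp gps_mp = meehua_add_mp(ctx, "ExampleGPS",
                                 "Longitude:double Latitude:double Elevation:double");

meehua_start(ctx);
meehua_inject(gps_mp, lon, lat, elev);  /* one sample into the associated MS */
meehua_terminate(ctx);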

High Level Use Cases

What do people want of Meehua?

OM comment: This section is still very young and needs refinement.

Stakeholders

  • Platform provider
  • Experimenter/User

Requirements

  • Platform provider
    • Monitor the platform
    • Give access to a subset of the information to the experimenter
      • Limited scope
      • Aggregated data
  • Experimenter/User
    • Collect contextual data (relevant platform health)
    • Get own data
    • Keep experimental data to themselves

References

  • architecture.png - Generic reporting architecture for Mēhua (4.54 KB) Olivier Mehani, 14/01/2013 05:56 PM
  • pp.png (18 KB) Olivier Mehani, 15/01/2013 04:16 PM
  • schemata.png (21.2 KB) Olivier Mehani, 16/01/2013 06:45 PM
  • ExampleTableGPS.png (9.17 KB) Olivier Mehani, 16/01/2013 06:45 PM
  • meehua_design_IMG_20130116_112220.jpg (1 MB) Olivier Mehani, 04/02/2013 05:22 PM
  • meehua_pp_control_IMG_20130130_123507.jpeg (68.9 KB) Olivier Mehani, 04/02/2013 05:22 PM
  • meehua_pp_control_IMG_20130130_123513.jpeg (64.7 KB) Olivier Mehani, 04/02/2013 05:22 PM