Reference architecture

Specification of a reference architecture for mobile manipulation & service robots.


News and Meetings

The deliverables (for all WPs) are available (final versions, possibly still subject to updates) at the Main_Page

The topic "Reference Architecture" has an overlap with the topic "Middleware", so a lot of information can be found there!

The papers and slides from the workshop at IROS2007 are now available.

There are some minutes from the intermediate expert meeting in Leuven.


Since there is no clear, standardized, or agreed definition of what a reference architecture is, and we should not get stuck on formalities, let us focus on the purposes, starting out with one of the most representative descriptions available:

The reference architecture part of RoSta is about paving the way for an integrated effort towards a technical architecture for mobile manipulation. The approach is along the following lines:

  • The application and system domain is robots and mobile manipulation, with the robots and other devices working robustly in unknown and unstructured environments, or in structured and well known environments with challenging performance requirements.
  • Finding first principles (invariants and core requirements) that apply to the domain, typically extracted from or experienced in successful or failing systems/implementations.
  • Formation of expert groups and arrangement of expert meetings, initially in close cooperation with the middleware part (middleware is typically developed with respect to a particular architecture, so the evaluation of current architectures goes hand in hand with the evaluation of middleware) and with the ontology part, which forms definitions for future reusable specifications and implementations.
  • Finding means of bootstrapping future architecture definition efforts (including their invariants and meta-level descriptions) such that the resulting methods and reference architecture instances are readily applicable without an initially steep learning curve.

To that end and to keep this quite abstract topic concrete, the following applies:

  1. There is an aim of developing conceptual prototypes for proposed mechanisms (but implementations do not prescribe "the" architecture).
  2. The aim is not a standard reference architecture negotiated by some committee on the basis of existing architectures, but practically useful agreements for future efforts.

See the middleware page for a list of current architectures, evaluation criteria, and the (so far shared) mailing list.

Preliminaries - from proposal and initial work plan

The purposes of a reference architecture include means to:

  • Cope with complexity.
  • Communicate system design to developers.
  • Put features and limitations into context from a usage point of view.
  • Technically support parameterized/foreseen types of changes.
  • Embody domain knowledge, such as the combination of mobility and manipulation.

The current limitations/challenges of existing reference architectures include:

  1. Standards exist but are too monolithic and complex; utilizing them requires so much extra engineering that non-standard solutions give a shorter time to market.
  2. Definitions are available but not in a directly accessible form; they require substantial fees to some consortium, and/or the definitions are not directly useful in typical engineering environments.
  3. Data types are described in standards but are not self-contained in the components and connections; definitions are restricted to well-known cases, and there is a versioning problem in keeping the standards and the implemented systems in sync with each other.
  4. The meaning of types and semantics is limited to specific standards; integration is hampered since software for translating data between systems needs to be hand-coded.
  5. Communication standards impose the use of special hardware, even when not strictly needed; most communication standards prevent the use of low-cost hardware such as consumer electronics, increasing product cost and reducing market opportunities.
  6. The fundamentally different needs of connection-based communication and state-based communication are not reflected in many of the current standards; the lack of separation between periodic or event-based communication and state-based messaging (as in standards like XIRP, TTP, and FlexRay) results in deficient support for middleware development.
  7. Embedded resource constraints, such as communication bandwidth or packet sizes and the time and memory needed for high-level interconnections, are not reflected well enough. This limits the options for down-scaling and too often requires ad-hoc add-ons (hard-coding, extra proxies, etc.).
  8. Standards for safe systems/robots do not sufficiently separate between safety-critical but 'stoppable' robots and safety-critical robots that are also mission-critical; trading off different forms of dependability against cost is not reflected in implementation standards or practices, with the effect that future safe robots would be unnecessarily expensive.
  9. Safety standards for human-robot collaboration exist only in the field of industrial robots and cannot be directly transferred to service robot applications. This seriously hampers the introduction of mobile manipulators and service robots in most practical applications.
  10. Standards for plug-and-play and distributed components (such as UPnP) do not support high-level real-time services; distributed (web or intranet) services lack the properties needed for scalable robotic systems.
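The hand-coding problem in limitation 4 can be made concrete with a small sketch: if the field names and units of each system's messages are captured in a declarative mapping, one generic translator can replace a hand-written converter per pair of systems. All message field names and units below are invented for illustration, not taken from any existing standard.

```python
# Hypothetical declarative mapping: target field -> (source field, unit factor).
# A vendor message uses millimetres and degrees; our system uses metres and radians.
import math

POSE_MAPPING = {
    "x_m": ("pos_x_mm", 0.001),
    "y_m": ("pos_y_mm", 0.001),
    "heading_rad": ("yaw_deg", math.pi / 180.0),
}

def translate(message: dict, mapping: dict) -> dict:
    """Apply a declarative field/unit mapping to a message dictionary."""
    return {target: message[source] * factor
            for target, (source, factor) in mapping.items()}

vendor_msg = {"pos_x_mm": 1500.0, "pos_y_mm": -250.0, "yaw_deg": 90.0}
our_msg = translate(vendor_msg, POSE_MAPPING)
```

The point is not this particular code but the shift of effort: adding a new system means writing one mapping table, not one translator program.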

The ongoing ToDos (according to plan) are investigations of:

  • Review of architectures and compilation of lessons learned into applicable guidelines.
  • Definition of architectural principles and means of description using ontology-connected terms.
  • Conclusions from the first year, expressed in terms of technology platforms.
  • Definition of points of variation for upcoming changes.
  • Definition of invariants to hold during the system lifetime to ensure dependability, safety, and predictability.
  • Putting implementation techniques and mechanisms into context.
  • Promotion of extra-functional properties such as testability and performance.
  • Tools to maintain well-defined meanings of objects and their relations, based on ontology.

Open issues - please mail your input

Since RoSta is a Coordinated Action (not actually funding real research), the following fundamental open issues are listed for the purpose of obtaining solutions from the community, or, if solutions are missing and the problem challenging, to get input (descriptions or suitable partners) for future research proposals/calls.

  1. What is the list of inherent technology dependencies to be kept in mind; what are the architectural requirements that are invariant (with respect to any instance of an architecture in our domain)? The initial items in the following list are given as examples to explain this question; please provide your insight about other dependencies:
    A real-time component or architectural solution can be used in a non-real-time scenario, while a non-real-time component may need total re-engineering to be useful in a real-time system. Still, e.g. using redundancy and supervisory control, it might be possible to use non-real-time solutions even in real-time control (at least in non-mission-critical systems), but from an engineering point of view some solutions/components had better be suitable for real-time (and concurrent) use from their basic design and implementation. Which components are to be real-time capable (how and when is that decided during system development, and how is the decision stored and maintained)? How are real-time methods maintained with respect to other methods and the synchronization of shared data?
    A single insecure (locally modified version of a) component can be enough to break the security of an entire system. On the other hand, as a matter of separation of concerns, security is not an issue during most parts of algorithm and system development, which implies there must be modularization concepts that can isolate errors and effects of undesired data changes. In any case, information security is not an add-on; it needs to be worked out from an architectural point of view.
  2. For both descriptions and requirements of systems and architectures, it is suggested that the common classification into functional and non-functional requirements be refined. Non-functional requirements that (e.g. on a component level) not only concern quality of service but are required for (e.g. dependable) operation are better denoted extra-functional requirements. There are reasons to believe that unawareness of the implications of this issue is one reason for less robust mobile manipulation today, obvious examples being real-time guarantees and the management of implicit resource constraints of computing platforms. This is in line with ongoing research in the embedded systems area, as indicated in the "Current results" of FP6 ARTIST2 work and in industrial control (fully coherent definitions have not yet been found), but could there be an agreement on the suggested revised classification?
  3. To actually engineer safe systems (e.g. robots not accidentally harming humans), the issue of safety needs special attention. First we must assume that the previous items in this list have been worked out. That might involve strict use of either completely type-safe (in short, safe) programming languages or hardware-supported modularization, such as extensive use of memory protection together with suitable APIs. Note that we want to support both fail-safe systems containing low-cost COTS components and, when needed, dependable control of life-critical systems (where an immediate stop is not acceptable, so fault tolerance is needed). Future robot components (hardware or software) should, if possible, be useful in both situations (but with no significant extra cost in the safe low-cost case).
  4. It is claimed that when software components are composed according to any of the existing component/middleware frameworks, there is nothing like the principle of superposition in the case of real-time (or perhaps even resource-constrained) systems. Within electronics, signals, and mechanical design that principle greatly facilitates efficient engineering, and for concurrent (non-real-time) software the SMARTSOFT architecture provides useful solutions, but for real-time software there are no known solutions. Or are there some new results from the embedded systems community?
  5. Architecture description languages (ADLs) are useful for the formal description of reference architectures, but it is not clear how to establish an ADL that is suitable with respect to all the other issues here, plus the need for applying external tools and model checkers. ADLs also need to support the definition of anticipated points of variation, and they need to be suitable for system development by 'ordinary' engineers (integrated with standard software tools). More on this to come, but we are seeking researchers with ADL competence.
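The extra-functional requirements of issue 2 can be sketched in code: instead of leaving timing and resource needs implicit, each component declares them, and a composition step checks them before deployment. This is a minimal illustration under invented names and a deliberately crude admission test, not a proposal for an actual interface.

```python
# Hypothetical sketch: extra-functional requirements declared per component
# and checked at composition time. All names and numbers are invented.
from dataclasses import dataclass

@dataclass
class ExtraFunctional:
    """Requirements needed for correct operation (not mere quality of service)."""
    period_ms: float     # activation period
    deadline_ms: float   # the component must finish within this bound
    wcet_ms: float       # worst-case execution time

@dataclass
class Component:
    name: str
    xf: ExtraFunctional

def admissible(components) -> bool:
    """Crude admission test: every deadline fits within its period, and the
    total processor utilization stays at or below 100%."""
    if any(c.xf.deadline_ms > c.xf.period_ms for c in components):
        return False
    utilization = sum(c.xf.wcet_ms / c.xf.period_ms for c in components)
    return utilization <= 1.0

system = [
    Component("laser_driver", ExtraFunctional(period_ms=25, deadline_ms=25, wcet_ms=5)),
    Component("arm_control", ExtraFunctional(period_ms=10, deadline_ms=10, wcet_ms=4)),
]
```

A quality-of-service preference (say, camera resolution) could be degraded at run time without breaking the system; the fields above cannot, which is exactly what the proposed extra-functional/non-functional distinction is meant to capture.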

As usual "the devil is in the details", so all the eyes of the community are needed for a close look at these issues.

Selected results from ongoing further developments

Expert meetings and discussions have resulted in the following suggested principles for future reference architecture actions:

  1. Observing that the last 30 years of standardization efforts in reference architectures for robot control have not resulted in any widespread standard, and that even the aims of standardization are confusing since developers/users talk about standards on very different levels, there is no hope for yet another traditional attempt. Instead, the confusion between the different attempts and views (or the involved interfaces) of reference architectures (and their meta-levels) needs to be treated by some kind of separation of concerns, which we propose to divide into four levels:
    1. Ontology representation, defining in words, independently of any mathematical representation, the terminology and the meaning (semantics) of the relevant objects in the scope of the standard, and of the natural operations on those objects.
    2. Mathematical representation, providing data structures (coordinate representations) for the above mentioned objects, as well as the API (Application Programming Interface) for the natural operations on the physical objects.
    3. Computer representation, defining how coordinates are represented in computer-readable form.
    4. Native hardware representation, defining the mapping from computer-readable variables into bit representations in the hardware.
    where each of these levels has to do with how to represent information in terms of data and algorithms. A data representation on level 3 can be in terms of ASN.1 (in the telecom domain) or AMBER (in the molecular dynamics domain). For algorithms on level 2 we have MathML, but what are the other examples, and how should it all look in the robotics domain?
  2. Task/mission/manipulation specification: Mobile manipulation based on external sensing (sensing principles not known in advance by built-in motion control) has been identified as a key topic (including key challenges in terms of real-time, user understanding, handling real-world variations, etc.) for any future successful architecture. A detailed example-based study is being carried out during August and September.
  3. The two previous items both call for automatically generated transformation of information based on high-level (typically ontology-based) descriptions. An adjacent project is prototyping how a compiler toolkit with declarative power can be used to facilitate that.
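The four levels of item 1 can be made concrete for a single robotics object, an orientation. Level 1 (ontology) would define "orientation" and "composition of rotations" in words; the sketch below, with invented function names and not drawn from any RoSta deliverable, shows what levels 2-4 might look like for it.

```python
# Illustrative sketch of representation levels 2-4 for one object, an orientation.
import json
import struct

# Level 2: mathematical representation -- a unit quaternion (w, x, y, z)
# with its natural operation, composition of rotations (Hamilton product).
def compose(q1, q2):
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

# Level 3: computer representation -- the coordinates in a computer-readable
# encoding (here JSON; ASN.1 would play this role in the telecom domain).
def to_json(q):
    return json.dumps({"quaternion": {"w": q[0], "x": q[1], "y": q[2], "z": q[3]}})

# Level 4: native hardware representation -- the mapping of those variables
# to a bit layout, here four little-endian IEEE 754 doubles (32 bytes).
def to_bytes(q):
    return struct.pack("<4d", *q)

identity = (1.0, 0.0, 0.0, 0.0)
quarter_turn_z = (0.7071067811865476, 0.0, 0.0, 0.7071067811865476)
half_turn_z = compose(quarter_turn_z, quarter_turn_z)
```

Each level can change independently: swapping JSON for a binary encoding touches only level 3, and a big-endian target only level 4, which is the kind of separation of concerns the four-level proposal is after.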

The key upcoming event is the IROS2007 workshop, where we hope to discuss these and many other issues.


Currently, expert groups are shared with WP1 and WP3, but with additional smaller meetings and studies for specific topics.
