
A Real-Time Web of Things Framework with Customizable Openness Considering Legacy Devices

Shuai Zhao 1,*, Le Yu 2 and Bo Cheng 1

1 State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing 100876, China; [email protected]
2 China Mobile Information Security Center, Beijing 100033, China; [email protected]
* Correspondence: [email protected]; Tel.: +86-10-6119-8022

Academic Editors: Massimo Poncino and Ka Lok Man
Received: 7 July 2016; Accepted: 23 September 2016; Published: 28 September 2016

Abstract: With the development of the Internet of Things (IoT), resources and applications based on it have emerged on a large scale. However, most efforts are "silo" solutions in which devices and applications are tightly coupled. Infrastructures are needed to connect sensors to the Internet, open up and break the current application silos, and move to a horizontal application mode. Based on the concept of the Web of Things (WoT), many infrastructures have been proposed to integrate the physical world with the Web. However, issues such as the lack of a real-time guarantee, the lack of fine-grained control of data, and the absence of explicit solutions for integrating heterogeneous legacy devices hinder their widespread and practical use. To address these issues, this paper proposes a WoT resource framework that provides the infrastructure for the customizable openness and sharing of users' data and resources under the premise of ensuring the real-time behavior of their own applications. The proposed framework is validated by actual systems and experimental evaluations.

Keywords: Web of Things; device access; real-time interaction; resource platform; semantic model

1. Introduction

With the development of the Internet of Things (IoT), resources and applications on top of it have emerged on a large scale. However, most efforts have focused on single applications and "silo" solutions in which devices and applications are tightly coupled. These solutions are characterized by "proprietary devices for one specific application". Moreover, the market is currently quite fragmented: each industry vertical has developed its own technical solutions without much regard for reuse and commonality [1–3]. Due to the lack of suitable infrastructures and the limited understanding of the two worlds (cyberspace and the physical, embedded world), developers have to bridge this gap "manually" and have to become experts in both worlds. The development of IoT applications is thus cumbersome and inefficient [2,4].

In addition, the multitude of legacy sensor devices will play an important role in the IoT. Consequently, infrastructures are needed to connect sensors to the Internet and publish their output in well-understood, machine-processible formats on the Web, thus making them accessible and usable at large scale under controlled access [3]. They should open up or break the current application silos and move to a horizontal application mode [5]. The success of the Web has encouraged researchers to consider incorporating real-world objects into the World Wide Web, namely the Web of Things (WoT). In the WoT, each physical object is represented as a Web resource, which can be invoked through a Uniform Resource Identifier (URI) using Web APIs. Based on the WoT concept, many infrastructures have been proposed to integrate the physical world with the Web. However, issues such as the lack of a real-time guarantee, the lack of flexible and fine-grained control of data, and the absence of explicit solutions for integrating heterogeneous legacy devices hinder their widespread and practical use.


To address these issues, this paper proposes a multi-level WoT framework that includes a public platform (PP) and a local platform (LP). The PP enables applications to reuse and share cross-domain resources, while the LP overcomes the technology heterogeneity within an application domain and provides a real-time resource platform for applications within the domain. Specifically, this paper makes the following contributions:

(1) Unified device access (UDA) is implemented based on the Open Service Gateway Initiative (OSGi) and the dependency-inversion pattern. It shields the heterogeneity of devices and exposes them in a uniform resource manner.
(2) A multilevel and multidimensional (MM) model is proposed to describe the resources and information of the framework. It decouples the upper applications from the underlying devices. Based on it, sensing data can be promoted and interpreted to context information. In addition, with the help of a data manipulation language (DML), the opening granularity and the aggregation method of sensing data can be flexibly constrained.
(3) The proposed multi-level framework enables the customizable opening and sharing of users' resources while ensuring the real-time behavior of their local applications. Its practicability is validated by actual systems and experimental evaluations.

The remainder of the paper is organized as follows: Section 2 reviews related work and challenges. Section 3 presents an overview of the proposed multi-level framework. Sections 4–6 describe the key technologies of the framework. Section 7 demonstrates the practicability of the proposed framework by a case study and experimental evaluations. Section 8 concludes the paper and discusses our future work.

2. Related Work and Challenges

Based on the concept of WoT, many commercial platforms have been proposed, such as Cosm [6], SenseWeb [7], GSN [8], and Yeelink [9]. These infrastructures act as data platforms, which manage millions of data points from thousands of individuals and companies around the world. They enable people to share, discover and monitor sensor and environmental data from objects that are connected to the Web [2,10,11]. However, these existing efforts are still far from forming an ecosystem: they lack commercial and practical applications, most resources are provided by enthusiasts, and the applications on top of them are still prototypes or "toy level".

Apart from these commercial platforms, multiple research projects (such as PECES, SemsorGrid4Env, SENSEI, and IoT-A), aiming to propose solutions for the integration of the physical world with cyberspace, have been launched. The PECES architecture [12] provides a software layer to enable the seamless cooperation of embedded devices across various smart spaces on a global scale. It enables dynamic group-based communication between PECES applications by utilizing contextual information based on a flexible context ontology. Entities are encapsulated as contextual information, a model abstraction which can include spatial elements, personal elements, and devices and their profiles. The PECES Registry implements a Yellow Pages directory service: any service matching a given description may be returned by the registry. The SemSorGrid4Env architecture [13] provides support for the discovery and use of sensor-based, streaming and static data sources. The architecture may be applied to almost any type of real-world entity, although it has been used mainly with entities related to natural phenomena. The types of resources considered are: sensor networks, off-the-shelf mote-based networks or ad-hoc sensors, and streaming data. These resources are made available through a number of data-focused services (resource endpoints), which are based on the Web Services Data Access and Integration (WS-DAI) specification for data access and integration. The SENSEI architecture [14] aims at integrating geographically dispersed and Internet-interconnected heterogeneous Wireless Sensor and Actuator Network (WSAN) systems into a homogeneous fabric for real-world information and interaction. In the SENSEI architecture, each real-world resource is described by a uniform resource description, providing basic and semantically expressed advanced operations of a resource, describing its capabilities and REP information. On top of this unifying framework, SENSEI builds a context framework with a 3-layer information model.


One of the key support services is a rendezvous mechanism that allows resource users to discover and query resources that fulfill their interaction requirements. The IoT-A project [15,16] extends the concepts developed in SENSEI to provide a unified architecture for the IoT. It aims at the creation of a common architectural framework for making a diversity of real-world information sources, such as wireless sensor networks, accessible on a Future Internet.

Although many significant works have been proposed, several challenges that hinder the widespread use and further development of the IoT still exist:

(1) Real-time and reliability. IoT application scenarios are time-critical and reliability-critical [5]; the premise of resource sharing by vendors or developers is ensuring the real-time behavior and reliability of their own applications. However, these platforms cannot guarantee this premise: the delay of data uploading and retrieval is uncontrollable. If commercial users open their resources and develop applications on top of them, they have to bear twice the network delay (from devices to the remote platform, and from the remote platform to applications).
(2) Customizable openness. Users need flexible and fine-grained control over how much information will be published (e.g., direct access, regular summary), how their sensor data are shared (e.g., read-only, time-delayed), and with whom (owner-only, specific individuals) [4]. However, existing platforms lack mechanisms for users to control their data. For example, Cosm only provides two options (private and public); if a data stream is set to public, all of its data are exposed to anyone [1].
(3) Legacy devices. These platforms focus on solutions for accessing (plugging in) new devices, and some of them are only able to access their own customized devices. They lack an explicit solution for the integration of legacy devices, which are important but quite fragmented and heterogeneous. A unified solution is required to integrate heterogeneous devices and expose their capabilities through uniform interfaces.
(4) Resource and information model. The resource and information model is the foundation of essential platform services, such as resource selection and data interpretation [2,16]. Existing commercial platforms (such as Cosm and SenseWeb) lack such a model; their data stay at the level of raw data or observation data. However, users are interested in real-world entities and their high-level states. Although research projects (e.g., SENSEI, IoT-A) have proposed meaningful models for modeling resources, entities and their mapping relationships, these models tend to mix resources' multiple feature dimensions into an integrated ontology hierarchy. This dimension-mixed approach makes it difficult to build a well-defined resource hierarchy, and it may not be able to select resources and establish resource-entity mappings automatically.

3. Framework Overview

The premise of resource sharing is ensuring the real-time behavior and reliability of resource providers' applications. Although existing platforms like Cosm and io.bridge have proposed several exchange modes (e.g., MQTT and Pawl) to improve their interaction efficiency, real-time behavior and reliability still cannot be guaranteed. The data exchange delay is uncontrollable and unpredictable, as it depends on network conditions. If commercial users open their resources and develop applications on top of them, they have to bear twice the network delay. Moreover, these existing platforms limit the maximum frequency of data exchange; if users make requests more frequently than allowed, none of their requests will receive a response. For example, the Yeelink platform sets the minimum time interval between two Hypertext Transfer Protocol (HTTP) requests regarding the same data to 10 s. If a user requests the data too frequently, the platform will not answer the requests and returns a "403 error" [10].

As shown in Figure 1, we propose a multi-level framework that provides resource openness while avoiding the unnecessary delays and instability caused by such openness. It consists of a public platform (PP) and a local platform (LP). The PP enables users to share sensor data from objects that are connected to the Internet. It provides lightweight interfaces to facilitate the composition of new services that incorporate devices from different domains.


Applications can interact with each other, and their data and services can be integrated with resources in other systems. Any terminal that supports the Internet can access these resources, rather than only customized clients. Moreover, it reduces costs by reusing common resources (e.g., storage resources and communication resources like SMS).

Figure 1. Framework Overview.

Many IoT applications are time-critical and reliability-critical [5], such as a coal mine monitoring system for harmful gases; details of this system will be presented in Section 7.1. Moreover, even within a single organization or domain, the number of heterogeneous solutions can be large. These solutions, provided by different hardware or software manufacturers, are isolated from each other. The resources and data are locked in each system and cannot be shared and reused. For example, in the coal mine scenario discussed in Section 7.1, the monitoring system is isolated from the reporting system because they were developed by different manufacturers. Hence, users have to copy the data from the monitoring system, perform calculations, and fill in the reports manually. Thus, in addition to the PP, which opens resources, the LPs, which provide real-time and reliable services for applications, are necessary. The LP also acts as an infrastructure for inner-domain resource sharing and reuse. It maintains the extensibility of systems in the domain, and new devices and applications can be easily integrated into it. Meanwhile, existing applications can be extended agilely to reuse devices and interact with other applications. Therefore, the LP is very useful for commercial application scenarios even if they do not want to open their resources. The framework proposes three approaches for accessing (plugging in) new or legacy devices, as demonstrated in Figure 1:

(1) Individual users that just want to access their devices in a simple way need only create their resources on the PP, copy a piece of common code to their gateways (e.g., an Arduino board or a PC), and then fill in the URI of the resources they created. After that, the data can be uploaded or retrieved through the PP using the resource's URI.
(2) If resource providers want to access multiple heterogeneous devices, they can use the UDA to shield the heterogeneity of devices without interfering with the legacy Wireless Sensor Networks (WSNs). The UDA will expose service endpoints directly or upload well-formed data to the PP.
(3) Commercial users can use the UDA to shield the heterogeneity of devices. Then, the LPs expose the resource capabilities as lightweight services and publish the resources that the users want to share to the PP.

After that, users can open the resource and flexibly configure the degree of openness using the DML. For example, they can configure whether others can directly access their LPs or only access the data uploaded to the PP. They can also choose to share an aggregation or summary of the data periodically, rather than sharing the entire data stream. Moreover, the framework provides multiple interaction modes according to the degree of resource openness (a minimal sketch of the first two modes follows the list):

(1) Centralized mode: As indicated by the thick blue lines of Figure 1, consumer applications retrieve data from the PP, which collects data from distributed devices using push-based (publish/subscribe) or pull-based (request/response) methods. The PP acts as a data broker center which is responsible for all data uploading and retrieval.
(2) Peer-to-peer mode: This mode is indicated by thick green lines. It is a distributed "peer-to-peer" interaction mode in which resource consumers directly access resources from the LPs (or UDA) of other domains. The LPs expose their resources through RESTful interfaces. Consumers can invoke the resources using standard URIs, and this mode avoids the delay from the devices to the public platform. In this mode, the PP acts as a directory or rendezvous of resources, which provides the discovery and selection services for users to obtain the access addresses of their desired resources.
(3) Local mode: Demonstrated by thick red lines, the applications of resource providers invoke their own resources on top of the LP within an application domain. The interaction of this mode may be within an intranet, which avoids the delay or unreliability of the external network. The LP provides real-time and extensibility services for applications within the domain. These applications can also use resources provided by other application domains using the above two modes.
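As an illustration of the centralized and peer-to-peer modes, the following sketch shows how a gateway might push a reading to a PP resource URI and how a consumer might read directly from an LP's RESTful interface. The URIs, the JSON payload shape, and the endpoint paths are assumptions for illustration; they are not prescribed by the framework.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

// Minimal sketch of the centralized and peer-to-peer interaction modes.
// All URIs and the JSON payload format are illustrative assumptions.
public class WotInteractionSketch {

    // Centralized mode: a gateway uploads an observation to its resource URI on the PP.
    static void uploadToPublicPlatform(String resourceUri, double value) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(resourceUri).openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/json");
        String body = "{\"value\": " + value + ", \"timestamp\": " + System.currentTimeMillis() + "}";
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("PP responded with HTTP " + conn.getResponseCode());
    }

    // Peer-to-peer mode: a consumer invokes a resource exposed by a remote LP directly,
    // after having discovered its access address through the PP directory.
    static String readFromLocalPlatform(String lpResourceUri) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(lpResourceUri).openConnection();
        conn.setRequestMethod("GET");
        try (Scanner in = new Scanner(conn.getInputStream(), StandardCharsets.UTF_8.name())) {
            return in.useDelimiter("\\A").hasNext() ? in.next() : "";
        }
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical URIs in the style of the testbed addresses used later in the paper.
        uploadToPublicPlatform("http://192.127.0.1/resource/inTemperature", 42.5);
        System.out.println(readFromLocalPlatform("http://192.127.0.2/resource/inTemperature1"));
    }
}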

Figure 2. Internal architecture of resource platform. (The figure shows the Web server with publish/subscribe and service endpoints, the SE Manager, resource dynamic creation, the DML execution engine, the resource/entity/model directories, the query resolver, the mapping requirements repository, the rule engine, semantic matching, similarity measurement cache, model reverse evolution, lifecycle management, information interpretation, the event generator, the information and resource model, and the model event, context data and observation data archives, connected to the Internet and intranets.)

Figure 2 shows the internal architecture of the resource open platform. When a new resource accesses it, its resource description is published to the platform, where it is maintained in the resource model. The Event Generator generates a new resource-accessing model event and sends it to the Model Event Archive. We call the relationship whereby a resource provides services (observe or control) for an entity the resource-entity binding. For each request, the platform uses the Model Directory to find out which resources can provide information for the requested entity attributes. If no resource is associated with an entity attribute, the framework first tries to use available context information and resource descriptions to establish new associations. The Query Resolver analyzes and transfers the requested resource descriptions into an abstract resource description (ARD). The Semantic Matching calculates the matching degree based on the entity and resource models and then recommends the matching resources to the user. The user chooses the desired resource, and this resource is then bound to the entity that needs to be monitored. The binding has two meanings: when users want to observe or change the state of entities, the platform can invoke the corresponding resource services; and observation data can be interpreted to context data and events. The UDA transfers the heterogeneous raw data into resource-level observation data, and then the Information Interpretation promotes the observation data to context data based on the entity model and the resource-entity binding. Next, the Event Generator generates events if the data match the event conditions. The same observation data can be interpreted to multiple meanings according to the entity model, which represents the context of the application scenario.

4. Unified Device Access

Existing platforms like Cosm and io.bridge focus on solutions for accessing new devices. However, they lack a well-defined solution for integrating legacy devices, which play an important role but are quite fragmented and heterogeneous. A device adaptation mechanism is required to access these devices and normalize the way they are exposed via a unified API to applications, and it should describe their capabilities in an application-independent way, regardless of whether the device is ZigBee or Z-Wave. That is exactly what the UDA provides.

As shown in Figure 3, the UDA integrates heterogeneous devices on one side and exposes uniform service interfaces on the other side. The responsibility of device access includes not only adapting the protocols but also adapting the device capabilities, publishing the resources, following up resource lifecycle management, and so on. The UDA has the following characteristics (a sketch of a dynamically deployed protocol bundle follows Figure 3):

(1) The protocol stack is plug and play (PnP). Based on the dynamics of OSGi and the dependency-inversion pattern, the UDA is flexible and exhibits High Cohesion and Low Coupling (HCLC); device protocols within the UDA are encapsulated as standard OSGi bundles. Protocols can be deployed or re-deployed dynamically. Moreover, protocol stacks can also be assembled and re-assembled dynamically, as shown in Figure 4. After a new protocol is deployed from a compiled jar file, or a new protocol stack is assembled from several protocols, it registers itself with the protocol stack manager. This new protocol stack then immediately starts parsing real-time packets without restarting the program. The entire process is PnP and does not affect the adaptation continuity of other protocols.
(2) Heterogeneous protocols can be adapted automatically. When a new device accesses, the device packet is published on the protocol bus. Then, every protocol stack checks whether it can correctly parse the packet. If a protocol stack is able to parse the packet, it responds to the protocol stack manager. The protocol stack manager binds the device with an instance of this protocol stack using an identifier (such as a TCP/UDP port number) that can distinguish different devices and is accessible by the protocol stack. Subsequently, packets provided by this device are handed over to this instance for parsing.

Figure 3. Internal structure of UDA.
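To make the PnP behavior concrete, the sketch below shows how a protocol implementation might be published as an OSGi service from a bundle activator so that the protocol stack manager can discover it at runtime. The Protocol interface, its parse contract, and the manager-side service tracking are assumptions for illustration; the paper does not prescribe these exact interfaces.

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;
import java.util.Hashtable;

// Hypothetical protocol abstraction: parse a raw packet and return the payload,
// or null if this protocol does not recognize the packet.
interface Protocol {
    byte[] parse(byte[] packet);
    int layer();   // position of this protocol in a stack (1 = lowest)
}

// A Modbus-like protocol published as an OSGi service when its bundle starts.
// Deploying or updating the bundle makes the protocol available without restarting the UDA.
public class ModbusProtocolActivator implements BundleActivator {

    private ServiceRegistration<Protocol> registration;

    @Override
    public void start(BundleContext context) {
        Protocol modbus = new Protocol() {
            @Override
            public byte[] parse(byte[] packet) {
                // Illustrative check only: minimum length and a (hypothetical) function code range.
                if (packet == null || packet.length < 4 || (packet[1] & 0xFF) > 0x2B) {
                    return null;
                }
                byte[] payload = new byte[packet.length - 4];
                System.arraycopy(packet, 2, payload, 0, payload.length);
                return payload;
            }
            @Override
            public int layer() { return 1; }
        };
        Hashtable<String, Object> props = new Hashtable<>();
        props.put("protocol.name", "modbus-rtu");
        props.put("protocol.layer", 1);
        // The protocol stack manager is assumed to track Protocol services and
        // (re)assemble protocol stacks whenever one appears or disappears.
        registration = context.registerService(Protocol.class, modbus, props);
    }

    @Override
    public void stop(BundleContext context) {
        registration.unregister();   // the manager drops this protocol from its stacks
    }
}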


Figure 4. The dynamic assembling of protocol stack. (The figure shows the deployed protocols in the UDA arranged in layers, from the lowest to the highest, with downstream protocol parsing and upstream protocol encapsulation at each layer; a dynamically added layer illustrates how a new protocol stack is assembled from the deployed protocols.)

Figure 5 shows a simplified example of selecting a suitable protocol stack to parse a packet P1 correctly. Each small block represents one byte of the datagram. An orange block denotes the data payload part of the datagram, and a blue block denotes the metadata part of the datagram, which includes the header, tail, length, Cyclic Redundancy Check (CRC), etc. Assume that a new packet P1 is transmitted from port 9002 using a socket connection and that five protocol stacks (S1–S5) are assembled in the UDA. P1 is encapsulated by two protocols: the header of the underlying protocol is 0x1B and the header of the upper protocol is 0x2B. When P1 arrives, it is published to every protocol stack concurrently, and each protocol stack tries to parse P1 using its protocols. The yellow boxes stand for the parsing results of every protocol stack. S1 fails to parse P1 because S1's protocol header (0x2A) does not match P1's header (0x1B). The header and tail of S2's underlying protocol match P1, so S2 uses this protocol to parse P1 and extracts the data payload, which is the complete packet of S2's upper protocol. Next, S2 uses its upper protocol to parse the packet extracted in the previous step, and the result is false because the header (0x3C) of S2's upper protocol does not match P1's upper protocol (0x2B).

S3 uses its underlying and upper protocols to parse P1. The headers of S3 are consistent with P1, but S3 still cannot parse P1 correctly because P1's packet length is shorter than S3's protocol format. Similarly, S4 also fails to parse P1 because the result of parsing P1 with S4 is longer than S4's protocol format. Finally, only S5 is able to parse P1 correctly, because the packet format and packet length of every protocol (from bottom to top) are consistent. S5 then parses and extracts the data payload of P1 and translates it into application-available numerical values. In addition, the protocol stack manager binds S5 with the identifier that can recognize the source of P1; in this example, it binds socket port 9002 to S5. Subsequently, all packets from port 9002 are pushed to S5 directly.

If no protocol stack can parse the packet, the UDA tries to dynamically assemble a protocol stack based on the deployed protocols. As demonstrated in Algorithm 1, the deployed protocols are divided into different layers, and protocol stacks are composed of these deployed protocols (illustrated in Figure 4). The protocol stack manager publishes the raw packet to the deployed protocols. If a protocol can parse the outermost layer of the packet, it becomes the lowest layer of the new protocol stack, and it then parses the raw packet and extracts the data payload part. Sequentially, the data part is published to the other deployed protocols that lie in upper layers. At last, the other layers and the highest layer of the new protocol stack can be assembled by a similar process. If no suitable protocol stack can be assembled, the framework returns a failure result, which means that the existing deployed protocols cannot parse this packet and new protocols are required. The process is automatic, without human intervention.

Figure 5. An example of judging which protocol stack could correctly parse a new packet.

Algorithm 1. Automatic protocol adaptation.
Input: unknown packet: packet
Output: matched protocol stack: ps
Initialize:
    PS = {ps1, ps2, ..., psi, ..., pss}              // existing protocol stacks
    PC = {layer1, layer2, ..., layeri, ..., layerl}  // protocol container
    Layeri = {pi1, pi2, ..., pij, ..., pip}          // deployed protocols in layer i
SendToBus(packet);                                   // send packet to protocol bus
for all protocol stack psi ∈ PS do
    if (psi.PsAnalyze(packet) == 1) { Return psi; }
end for
Ps ps = new Ps();
for all protocol p ∈ layer1 do
    boolean b = TryAnalyze(packet, ps, p, layer1);
    if (b == false) { Return Null; }
    Return ps;
end for

Procedure TryAnalyze(packet, ps, p, layer)           // p is located in layer
    payload = p.Analyze(packet);
    If (payload == 0) { ps.push(p); Return true; }
    layeri = layer.next();
    If (layeri == null) { Return false; }
    For all protocol pi ∈ layeri do
        boolean b = TryAnalyze(payload, ps, pi, layeri);
        if (b == true) { ps.push(p); }
        Return b;
    end for
Procedure End
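A compact Java rendering of Algorithm 1 is sketched below: each already assembled stack first gets a chance to parse the packet, and otherwise a new stack is assembled recursively from the deployed, layered protocols. The Protocol and ProtocolStack types and the parse contract (returning null on failure) are assumptions for illustration, not the paper's actual implementation.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Sketch of Algorithm 1: try existing stacks first, otherwise assemble a new stack
// from deployed protocols organized in layers (index 0 = lowest/outermost layer).
public class ProtocolAdaptation {

    interface Protocol {
        /** Returns the extracted payload, or null if this protocol cannot parse the data. */
        byte[] analyze(byte[] data);
    }

    static class ProtocolStack {
        final Deque<Protocol> protocols = new ArrayDeque<>();  // head of the deque = lowest layer

        /** A stack matches when every layer, from lowest to highest, parses its part of the packet. */
        boolean analyze(byte[] packet) {
            byte[] current = packet;
            for (Protocol p : protocols) {
                current = p.analyze(current);
                if (current == null) return false;
            }
            return true;
        }
    }

    /** Entry point: returns a stack able to parse the packet, or null if none can be assembled. */
    static ProtocolStack adapt(byte[] packet, List<ProtocolStack> existingStacks,
                               List<List<Protocol>> layers) {
        for (ProtocolStack ps : existingStacks) {
            if (ps.analyze(packet)) return ps;          // an already assembled stack matches
        }
        ProtocolStack ps = new ProtocolStack();
        for (Protocol p : layers.get(0)) {              // try every protocol of the lowest layer
            if (tryAnalyze(packet, ps, p, layers, 0)) return ps;
        }
        return null;                                    // new protocols would be required
    }

    /** Recursive step: if p parses the data, keep it and continue with the payload in the next layer. */
    static boolean tryAnalyze(byte[] data, ProtocolStack ps, Protocol p,
                              List<List<Protocol>> layers, int layerIndex) {
        byte[] payload = p.analyze(data);
        if (payload == null) return false;
        if (payload.length == 0 || layerIndex + 1 >= layers.size()) {
            ps.protocols.addFirst(p);                    // p completes the stack
            return true;
        }
        for (Protocol next : layers.get(layerIndex + 1)) {
            if (tryAnalyze(payload, ps, next, layers, layerIndex + 1)) {
                ps.protocols.addFirst(p);                // prepend p below the layers already kept
                return true;
            }
        }
        return false;
    }
}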


Device identification and information can be extracted from the access packet. The Service Endpoint (SE) Manager initializes a resource instance and assigns a resource URI for the device according to the corresponding resource template. This URI is the identification and the access address of the resource. The SE Manager then puts the extracted device information into the resource description, which is well-defined according to the resource model. After that, "southbound" heterogeneous devices are exposed "northbound" in a uniform resource way. A device can provide one or several resources; for example, a sensor node may provide temperature and humidity data, and it can then be exposed as two resources, temperature sensing and humidity sensing. In this case, each resource needs to bind with a two-level identification of the device, for example the device ID plus the I/O channel number of each sensor, or the Programmable Logic Controller (PLC) number plus the register address.

However, the resource instance is not yet complete because the auto-generated information is limited. The device providers should therefore further describe it and attach metadata or semantic information, which is used for the subsequent resource discovery and data interpretation. Based on the multidimensional model described in Section 5, resources are further described with the help of a resource modeling tool developed by us based on OWL-API [17] and Java SWT [18]. With the modeling tool, the information of every feature dimension is described, including the inheritance hierarchy of resource classes, class axioms (subclass, equivalent, disjoint), class descriptions (intersection, union and complement), resource properties (data properties and object properties), resource relationships, restrictions (value constraints, cardinality constraints), etc. The details of the modeling tool are out of the scope of this paper and will be discussed in our future work.
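As a flavor of how such dimension-wise descriptions could be produced programmatically with OWL-API, the sketch below declares a resource class in a measurement-principle hierarchy and attaches a quantity-type property. The ontology IRI, class names, and property names are invented for illustration; the actual vocabularies used by the modeling tool are not listed in the paper.

import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.*;

// Minimal OWL-API sketch: describe a resource along two feature dimensions
// (measurement principle and quantity type). All IRIs below are hypothetical.
public class ResourceModelSketch {

    public static void main(String[] args) throws OWLOntologyCreationException {
        OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
        OWLDataFactory df = manager.getOWLDataFactory();
        String ns = "http://example.org/wot/resource-model#";
        OWLOntology ontology = manager.createOntology(IRI.create(ns));

        // Measurement-principle dimension: CapacitiveSensing is a kind of MeasurementPrinciple.
        OWLClass measurementPrinciple = df.getOWLClass(IRI.create(ns + "MeasurementPrinciple"));
        OWLClass capacitiveSensing = df.getOWLClass(IRI.create(ns + "CapacitiveSensing"));
        manager.addAxiom(ontology, df.getOWLSubClassOfAxiom(capacitiveSensing, measurementPrinciple));

        // Quantity-type dimension: this resource class observes atmosphere humidity.
        OWLClass humiditySensor = df.getOWLClass(IRI.create(ns + "AirFilledCapacitorHumiditySensor"));
        OWLObjectProperty hasQuantityType = df.getOWLObjectProperty(IRI.create(ns + "hasQuantityType"));
        OWLClass atmosphereHumidity = df.getOWLClass(IRI.create(ns + "AtmosphereHumidity"));
        manager.addAxiom(ontology, df.getOWLSubClassOfAxiom(
                humiditySensor,
                df.getOWLObjectSomeValuesFrom(hasQuantityType, atmosphereHumidity)));

        // Also record the measurement principle of this resource class.
        OWLObjectProperty hasPrinciple = df.getOWLObjectProperty(IRI.create(ns + "hasMeasurementPrinciple"));
        manager.addAxiom(ontology, df.getOWLSubClassOfAxiom(
                humiditySensor,
                df.getOWLObjectSomeValuesFrom(hasPrinciple, capacitiveSensing)));

        System.out.println("Axioms in the sketch ontology: " + ontology.getAxiomCount());
    }
}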


After that, the resource's information in every dimension is described, such as the measurement principle, measure quantity type, operating and survival range, spatial dimension, and so on. The UDA publishes the described resource to the platform, and the resource publication process is then finished.

When a data packet provided by the device arrives, the protocol stack parses it and extracts the data part, and then processes it to form unified-format raw data according to its offset and factor. The raw data are the lowest layer of the information model and contain the value observed by the device. They can be enhanced with metadata, e.g., units, the observed resource, and QoI (Quality of Information) described in the resource model. The device-level raw data are thereby promoted to resource-level observations. Moreover, if the data match an event condition, the Event Generator creates and publishes the corresponding data event. The Resource Manager monitors the lifecycle state of resources. The resource is periodically synchronized with the actual state of the physical device via its protocol stack. The state information is represented as a model event; a "resource failure" event, for example, is caused by the failure of the device.

5. Multilevel and Multidimensional Model

The information provided by resources ranges from raw data, to observation data containing meta-information, to the high-level context information required by specific applications and services [19]. Given the heterogeneity and fragmentation of the IoT environment, a well-defined resource model is necessary to shield the heterogeneity of device descriptions and represent them in a unified resource form. The MM model is proposed to decouple the upper applications from the lower devices. It defines the characteristics and properties of the resources, including the physical properties being observed; the properties of the resource's measurement capability; the physical structure on which the resource is deployed and its location; and the environmental conditions under which the resource is expected to operate. By associating resources and context, data can be interpreted to context information, and application-related meaning can be extracted within the context.

As shown in Figure 6, one characteristic of the model is that it is multilevel. It decouples the resource model from the entity model, which enables the decoupling of applications from the devices from the data model's viewpoint. Application developers only need to concentrate on the entities and application logic, and describe the entities and domain knowledge. The entity model contains the properties of the entities and their relationships. Moreover, different applications can interact with each other by linking their entities in their respective entity models. Meanwhile, resource providers only need to describe their resources. Therefore, the entity model of an application scenario can be related to multiple resource models, which means that applications can share resources at the model level. However, neither level alone is sufficient for resource selection and data interpretation; both resource and entity information are needed. The platform combines multilevel information using Linked Data and common ontologies, such as the Quantities and Units (QU) ontology [20] and the Climate and Forecast (CF) features ontology [21].

Figure 6. Multilevel and multidimensional model.

Multi-dimension is another characteristic of the model. We extract critical feature dimensions of resources to construct multidimensional models. Each dimension constructs a model independently, which includes the taxonomy, properties, descriptions and restrictions of this dimension. For example, the dimensions of measurement principle, measure quantity type, measure capability, operating and survival range, and spatial location are extracted to construct the resource model.

The measurement principle indicates the physical effects that can be used to convert stimuli into electric signals. For example, the measurement principle of a capacitive water-level sensor is the dielectric constant. According to the measurement principle, users can select resources that are more suitable for their application context. For example, magnetic sensors (based on magnetism) are often used to detect motion, displacement, position, and so forth, but they are unsuitable for scenarios with magnetic interference. Quantity type indicates the physical quantity that the sensor observes. For example, an air-filled capacitor may serve as a relative humidity sensor because moisture in the atmosphere changes the electrical permittivity of air; thus, the quantity type is atmosphere humidity. The quantity type model is built based on the W3C CF-feature ontology, which is an ontology representation of the generic features defined by the Climate and Forecast standard vocabulary.

The measure capability dimension is the key performance indicator of resources for selection. It consists of concepts such as frequency, accuracy, drift, and sensitivity. As the operation of resources will be affected by the observable conditions, the model describes the measuring capabilities of the resources under certain conditions. It can be used to define how a sensor will perform in a particular context and thus help characterize the quality of the sensed data. The operating and survival range dimensions describe the characteristics of the environment and other conditions in which the sensor is intended to operate. Operating range describes the environmental conditions and attributes of a resource's normal operating environment. Survival range describes the environmental conditions to which a sensor can be exposed without being damaged.
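To illustrate how the operating and survival range dimensions can be used during resource selection, the sketch below filters candidate resources by checking whether the expected ambient conditions of the deployment fall inside each resource's declared operating range. The Range and ResourceDescription types are invented for illustration; in the framework itself these constraints are expressed in the semantic resource model rather than as Java objects.

import java.util.List;
import java.util.stream.Collectors;

// Illustrative check of the operating-range dimension during resource selection.
public class OperatingRangeFilter {

    record Range(double min, double max) {
        boolean contains(double value) { return value >= min && value <= max; }
    }

    record ResourceDescription(String uri, Range operatingTemperature, Range operatingHumidity) { }

    /** Keep only resources whose operating range covers the expected ambient conditions. */
    static List<ResourceDescription> selectOperable(List<ResourceDescription> candidates,
                                                    double ambientTemperature,
                                                    double ambientHumidity) {
        return candidates.stream()
                .filter(r -> r.operatingTemperature().contains(ambientTemperature))
                .filter(r -> r.operatingHumidity().contains(ambientHumidity))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<ResourceDescription> candidates = List.of(
                new ResourceDescription("http://192.127.0.2/resource/inTemperature1",
                        new Range(-20, 60), new Range(0, 95)),
                new ResourceDescription("http://192.127.0.3/resource/relay1",
                        new Range(0, 40), new Range(0, 60)));

        // A coal-mine-like context: 45 degrees Celsius and 90% relative humidity (illustrative values).
        System.out.println(selectOperable(candidates, 45.0, 90.0));
    }
}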

Spatial location is an indispensable criterion for resource matching and determines whether the location of the entity lies in the observation area of the resource. The location is defined in terms of geographical coordinates (hasLatitude, hasLongitude). It also has properties that link to a symbolic location, which has two types: global and local location ontologies. The global location ontology URI links the resource to existing high-level location ontologies, e.g., GeoNames [22,23], which provides the toponymies or names of cities, districts, countries, etc. The local location ontology provides a detailed location description that captures indoor location concepts representing objects such as buildings, rooms, or other premises.

With the help of the multidimensional resource model, the information of a plugged resource is described clearly in each feature dimension, which enables selecting resources and establishing resource-entity mappings automatically based on semantics. The platform selects suitable resources by calculating the semantic similarity between a resource and an entity (or resource request). It transfers resource requests into a structured ARD. The ARD includes the monitored entity, its properties, and the type (mandatory or optional) of every required feature dimension. The semantic matching can accurately measure the similarity of each dimension in parallel. Users can aggregate the measurement values of different dimensions according to their preferences (e.g., by setting different weights on different dimensions), as sketched below. After a user selects the most preferred one from the recommended resource list, which includes the resources with the highest semantic similarity to the ARD, the mapping between the monitored entity and the selected resource is built, namely the resource-entity binding. Users only need to operate on the properties of the entity (get/put) to obtain the required data or control its state, while the details of resource invocation are transparent to them.

The information of the different layers can be converted from one to the other through mappings and transformations. The mapping of observations into context information is done explicitly at the time of information delivery by assigning the observation to the attribute value of an entity. For instance, the average temperature is an attribute of a room, and its value comes from averaging all values of the sensors in the room.
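A minimal sketch of this preference-weighted aggregation follows: per-dimension similarity scores (computed by the semantic matching) are combined into a single ranking score using user-supplied weights, and mandatory dimensions that fail to match disqualify the candidate. The dimension names, the weight scheme, and the score range are assumptions for illustration.

import java.util.Comparator;
import java.util.List;
import java.util.Map;

// Sketch: combine per-dimension semantic similarities into a single ranking score.
// Scores are assumed to be normalized to [0, 1]; dimension names are illustrative.
public class SimilarityAggregation {

    record Candidate(String resourceUri, Map<String, Double> dimensionSimilarity) { }

    /** Weighted average over the requested dimensions; mandatory dimensions must score above a floor. */
    static double score(Candidate c, Map<String, Double> weights,
                        List<String> mandatoryDimensions, double mandatoryFloor) {
        for (String dim : mandatoryDimensions) {
            if (c.dimensionSimilarity().getOrDefault(dim, 0.0) < mandatoryFloor) {
                return 0.0;   // a mandatory dimension does not match well enough
            }
        }
        double weighted = 0.0, totalWeight = 0.0;
        for (Map.Entry<String, Double> w : weights.entrySet()) {
            weighted += w.getValue() * c.dimensionSimilarity().getOrDefault(w.getKey(), 0.0);
            totalWeight += w.getValue();
        }
        return totalWeight == 0.0 ? 0.0 : weighted / totalWeight;
    }

    public static void main(String[] args) {
        Map<String, Double> weights = Map.of(
                "quantityType", 0.4, "spatial", 0.3, "measureCapability", 0.2, "measurementPrinciple", 0.1);
        List<String> mandatory = List.of("quantityType", "spatial");

        List<Candidate> candidates = List.of(
                new Candidate("http://192.127.0.2/resource/inTemperature1",
                        Map.of("quantityType", 0.95, "spatial", 0.8, "measureCapability", 0.7, "measurementPrinciple", 0.6)),
                new Candidate("http://192.127.0.3/resource/relay1",
                        Map.of("quantityType", 0.2, "spatial", 0.9, "measureCapability", 0.5, "measurementPrinciple", 0.4)));

        candidates.stream()
                .sorted(Comparator.comparingDouble((Candidate c) -> score(c, weights, mandatory, 0.5)).reversed())
                .forEach(c -> System.out.printf("%.3f  %s%n", score(c, weights, mandatory, 0.5), c.resourceUri()));
    }
}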

6. Data Manipulation Language

In order to flexibly configure the degree of openness, we propose the DML, a lightweight declarative language. Using the DML, resource providers can configure the level of opened information (raw data, resource level, entity level), the granularity of opened data (summarized or detailed), and the method of data aggregation (operation and time granularity). Moreover, users can dynamically create resource compositions based on existing resources. Table 1 depicts the grammar of the DML. Users can create new resources which act as proxy or aggregation resources for existing resources. They then use the DML to configure the resource content by referencing and aggregating resource-level (resource observation) or entity-level (entity property) information.

Table 1. Summary of the DML grammar.

Type | Statement | Description
Resource declaration | @ResourceName:"ResourceURI" | Declares a newly created resource. Generally, this is a virtual resource which will be assigned the aggregation result of real resources.
Resource reference | "ResourceURI, Input Type, [PropertyURI]" | Defines a variable by referencing a resource. ResourceURI is the URI of the resource; the input type indicates the type of data (digital or analog). If the data source is entity level, PropertyURI indicates a property of this entity.
Resource validity time | [YYYY-MM-DD]T[hh:mm:ss]Z | The lifetime of the resource.
Aggregation time period | [DD/hh/mm/ss]:time value | Four time granularities of aggregation: day level, hour level, minute level and second level.
Aggregation operation | [min/max/ave/med/cur] | Operation of aggregation: minimum, maximum, average, median and latest value.
Math operations: basic operations | +, -, *, /, %, ^, sqrt(x) | Add, subtract, multiply, divide, modulo, exponent, square root.
Math operations: trigonometric functions | sin(x), cos(x), tan(x) | Sine, cosine, tangent.
Math operations: inverse trigonometric | asin(x), acos(x), atan(x) | Arcsine, arccosine, arctangent.
Math operations: exponentials | exp(x), sinh(x), cosh(x) | Exponential function (e^x), hyperbolic sine, hyperbolic cosine.
Math operations: logarithms | ln(x), log2(x), log(x) | Natural (log base e), binary (log base 2), common (log base 10).
Math operations: rounding | ceil(x), floor(x) | Ceiling, floor.
Switch count | Switch() | Counts the number of state switches of a digital resource.
Prefixes and suffix | "prefix:[prefix]" "suffix:[suffix]" | Attaches a prefix or suffix to the data; for example, the unit of the data can be a suffix.
Formatting | "format:[decimal places]" | Formats the end result by rounding it to a specified number of digits.

Figure 7 shows an example of DML from the heating scenario discussed in Section 7.1 (case study). For simplicity's sake, the calculation formulas are simplified, and the IPs are internal addresses used for testing. In the example, the IP of the PP server is 192.127.0.1, and there are two LPs with the IPs 192.127.0.2 (LP2) and 192.127.0.3 (LP3), respectively. LP2 provides two analog resources: inTemperature1 (http://192.127.0.2/resource/inTemperature1), observing the supply water temperature, and outTemperature1 (http://192.127.0.2/resource/outTemperature1), observing the outlet water temperature. It also provides an entity: testroom1 (http://192.127.0.2/entity/testroom1), which represents a monitored room in the heat metering scenario. LP3 provides a digital resource: relay1 (http://192.127.0.3/resource/relay1), which indicates the status of relay 1.

Users create proxy or aggregation resources on the PP. As shown in number 1 of Figure 7, the statement declares the resource switch_Count to represent the resource (http://192.127.0.1/resource/relay1) newly created on the PP. switch_Count is a summary of relay1, which counts the number of state switches between closed and open every five minutes. As shown in number 2, med_In_Water (http://192.127.0.1/resource/inTemperature) acts as an aggregation resource for inTemperature1, which takes the median value of inTemperature1 every hour. Its unit is degrees Celsius, and the value has two digits after the decimal point. Number 3 shows the definition of two constants: the specific heat capacity (SHC) of liquid water (cap_Water) and the approximate SHC of air (cap_Air). We then reference the properties of the testroom1 entity: dTemperature represents the temperature difference of the room after heating, and roomVolume represents the volume of the room. We use them to calculate the available heat used for room heating, which is shown in number 4. As shown in number 5, by aggregating the observations of outTemperature1 and inTemperature1 we can obtain the average heat loss of the supply water, which is declared as ave_loss_heat (http://192.127.0.1/resource/lossHeat). Its unit is Joule, and the value has three digits after the decimal point. As number 6 shows, users can dynamically create a composition resource (http://192.127.0.1/resource/untilizationFactor) based on the existing resource and entity information. It is declared as untilization_factor, representing the effective utilization rate of heat, which is calculated as the ratio of the utilization heat to the loss heat (supply water).

Figure 7. An example of DML from a heating scenario.

7. Case Study and Evaluation

7.1. Case Study

Figure 8 shows the District Heating Control and Information System (DHCIS) and the Coal Mine Comprehensive Monitoring and Early Warning System (CCMWS) developed by us, which are in actual use. In this section, we will use these two application scenarios to demonstrate how the framework works. They access monitoring resources and systems to realize the goals of perception information sharing, comprehensive treatment and analysis, and cross-system service coordination. The DHCIS will cover 120 heating districts and include 200 monitored boiler rooms and heat transfer stations in Beijing. The perception-layer resources include heat metering sensors, flow sensors, pressure sensors, and room temperature detectors. The CCMWS integrates the resources and data of its application domain to ensure the safety of coal mine production and to realize office automation. The devices of these scenarios are fragmented and heterogeneous. The DHCIS scenario has hundreds of devices provided by different manufacturers which support different protocols such as H7000, Modbus, IHDC, and M-Bus. Initially, the systems of these scenarios were developed by multiple manufacturers and isolated from each other. The resources and data were closed in each system and could not be shared. The LPs provide an infrastructure for inner-domain resource sharing and reuse based on the resource model and the entity model. For example, the report system interacts with the monitoring system and the billing system through temperature and water flow observation resources, as well as heat metering and consumer entities, to share data and generate reports.

The PP provides the open platform for cross-domain resource sharing and interaction. For example, through customizable openness, heating offices can supervise the real-time heating situation of the heating companies, and the coal mine department can monitor the safety conditions of mining. Meanwhile, some data can be opened to meet the needs of the public, such as showing heating data to consumers. With the development of the IoT, more and more domains will open their resources and data; then numerous novel applications, DIY apps, and app stores based on real-world resources will emerge.

Figure 8. Framework demonstration.

To demonstrate the framework more intuitively, the following part of this section presents some practical examples from the DHCIS. Most of the protocols in the DHCIS are industrial control protocols. For simplicity's sake, we only present two protocol layers: the (data terminal) communications protocol and the data protocol. As shown in Figure 9, when a new packet arrives, the protocol manager publishes it to all communication protocols. Following the process discussed in Section 4, the H7000 protocol can parse this packet, so it becomes the lowest layer of the assembled protocol stack. It then parses the packet and extracts the data part. Subsequently, the data part is published to the deployed protocols in the upper data layers. The ModBus protocol can parse this packet, so it becomes the highest layer of the protocol stack. Finally, the ModBus and H7000 protocols are assembled into a new protocol stack. The protocol stack parses the packet and extracts the device ID, and the Protocol Stack Manager then binds the device to one instance of this protocol stack. After that, packets provided by this device are handed over to this instance to parse.
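The assembly process just described can be sketched roughly as follows. This is a minimal illustration rather than the UDA implementation: the Protocol interface, the layer-by-layer offering of candidates, and the binding table are assumptions derived from the description above.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of dynamic protocol stack assembly: a communication protocol (e.g., H7000)
// becomes the lowest layer, a data protocol (e.g., ModBus) the highest layer, and
// the assembled stack is bound to the device ID it extracts.
public class ProtocolStackAssemblySketch {

    interface Protocol {
        boolean canParse(byte[] packet);
        byte[] extractDataPart(byte[] packet);   // strips this layer's framing
        String extractDeviceId(byte[] packet);   // meaningful once the stack is complete
    }

    // device ID -> assembled protocol stack instance (the Protocol Stack Manager's table)
    private final Map<String, List<Protocol>> boundStacks = new LinkedHashMap<>();

    void assembleAndBind(byte[] packet, List<Protocol> communicationProtocols,
                         List<Protocol> dataProtocols) {
        List<Protocol> stack = new ArrayList<>();
        byte[] current = packet;
        for (Protocol p : communicationProtocols) {   // lowest layer: first protocol able to parse
            if (p.canParse(current)) {
                stack.add(p);
                current = p.extractDataPart(current);
                break;
            }
        }
        for (Protocol p : dataProtocols) {            // highest layer: parses the extracted data part
            if (p.canParse(current)) {
                stack.add(p);
                break;
            }
        }
        if (stack.size() == 2) {
            String deviceId = stack.get(stack.size() - 1).extractDeviceId(current);
            boundStacks.put(deviceId, stack);         // later packets from this device reuse the instance
        }
    }
}
```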

Figure 9. Example of protocol stack.

As Figure 10 shows, the data terminal sends an H7000 register packet to the UDA. The protocol stack framework parses this packet, extracts the device information from it, and publishes a resource description according to the resource template. As Figure 11 shows, the device provides a temperature resource. The quantity type and unit properties are linked to the standard physical quantities of the QU ontology. The SE Manager then maps the internal operations of the resources to RESTful service interfaces. For example, the new temperature observation resource can provide a "get sensing data" operation, and the SE Manager maps this operation to a GET service:

http://jinfang/resource/98765432143_1_D104/realtime/latest
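A minimal sketch of the kind of operation-to-REST mapping the SE Manager performs is shown below, using the JDK's built-in HttpServer (the framework's actual service layer is not specified here). The path mirrors the example URL above; the handler body and the JSON observation it returns are illustrative assumptions.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class RestMappingSketch {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        // Map the resource's "get sensing data" operation to a GET service path,
        // mirroring the URL published by the SE Manager in the example above.
        server.createContext("/resource/98765432143_1_D104/realtime/latest", exchange -> {
            // In the framework this would invoke the resource's internal operation;
            // a fixed observation is returned here for illustration.
            String body = "{\"value\": 23.5, \"unit\": \"Celsius\", \"timestamp\": \"2016-09-28T10:00:00Z\"}";
            byte[] bytes = body.getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, bytes.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(bytes);
            }
        });
        server.start();
    }
}
```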


Multi-dimensionality is another characteristic. We extract some critical feature dimensions of resources to construct multidimensional models. Each dimension constructs a model independently, which includes the taxonomy, properties, description, and restrictions of that dimension. For example, the dimensions of measurement principle, measured quantity type, measurement capability, operating and survival range, and spatial location are extracted to construct the resource model.
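As an illustration only, the per-dimension sub-models could be represented roughly as below; the dimension names follow the text, while the concrete fields are assumptions.

```java
// Hypothetical per-dimension sub-models of a resource description.
record MeasurementPrinciple(String taxonomy, String description) {}
record QuantityType(String quantity, String unit) {}
record MeasureCapability(double accuracy, double resolution, double samplingFrequency) {}
record OperatingRange(double min, double max) {}
record SpatialDimension(double latitude, double longitude, String locatedIn) {}

// A multidimensional resource model assembled from independently built dimensions.
record ResourceModel(String resourceId,
                     MeasurementPrinciple principle,
                     QuantityType quantityType,
                     MeasureCapability capability,
                     OperatingRange operatingRange,
                     OperatingRange survivalRange,
                     SpatialDimension spatial) {}
```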

Figure 10. The process of device accessing and resource description.

http://jinfang/injection_intake_transfer_2/temperature/set
http://jinfang/injection_intake_transfer_2/temperature/latest

Figure 11. Resource-entity binding and resource operation mapping.

Figure 12 shows the sensing information publishing process. When a data packet arrives, the protocol stack parses it and extracts the data part. "5B" stands for the real-time value of resource 1_D104, and the protocol stack then translates this value into a raw value according to the configured offset and coefficient. Next, the protocol stack combines this value with some metadata, such as the unit and quantity type, and generates the observation data according to the resource model. Then, the Information Interpretation component interprets this observation data into context information according to the resource-entity binding. The same sensing data can be interpreted into different context information according to different domain entities. In this example, the data are interpreted as the temperature property of the injection intake located in the 2# transfer station.
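The pipeline can be sketched as below. The hex field "5B", the resource ID 1_D104, and the target entity property come from the example; the calibration coefficient, the offset, and the binding table are assumptions added purely for illustration.

```java
import java.util.Map;

// Minimal sketch of the sensing information publishing process described above.
public class PublishingSketch {

    record Observation(String resourceId, double value, String unit, String quantityType, long timestamp) {}
    record ContextInfo(String entity, String property, double value, String unit) {}

    public static void main(String[] args) {
        // 1. The protocol stack extracts the data part and translates it to a raw value.
        int fieldValue = Integer.parseInt("5B", 16);          // 91
        double coefficient = 0.1, offset = 0.0;               // assumed calibration
        double rawValue = fieldValue * coefficient + offset;  // 9.1

        // 2. The value is combined with metadata to form observation data (resource model).
        Observation obs = new Observation("1_D104", rawValue, "Celsius",
                "Temperature", System.currentTimeMillis());

        // 3. Information Interpretation: the resource-entity binding turns the observation
        //    into context information of a domain entity.
        Map<String, String[]> binding = Map.of(
                "1_D104", new String[] {"injection_intake_transfer_2", "temperature"});
        String[] target = binding.get(obs.resourceId());
        ContextInfo ctx = new ContextInfo(target[0], target[1], obs.value(), obs.unit());
        System.out.println(ctx);
    }
}
```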


Figure 12. Information publishing process.

As demonstrated in Figure 13, we can configure a resource sum_trans_st to expose the temperature property of the entity injection_intake in an aggregated way. It represents the maximum temperature value of every day, meaning that the user only wants to open the summarized data of this property instead of opening the detail of every data point.

Figure 13. Summary temperature.

7.2. Evaluations

In order to evaluate the performance of the framework, experimental evaluations are necessary. The experiments were carried out on an Intel Core i3-2310M 2.10 GHz PC with 4 GB of RAM running 32-bit Windows 7 Professional. All servers of the LPs and the PP have the same specifications.

The UDA is the data source of the platform, so its protocol parsing performance should be evaluated. We simulated 500 data senders that independently send real protocol packets. The protocol stack parses these packets according to the real parsing process. As shown in Figure 14a, the maximum parsing speed is 250 packets per second, the minimum is 117, and the average parsing speed is 183.5 packets per second. From Figure 14b, we find that the average parsing time of each packet is 5.45 ms.


Figure 14. (a) Parsed packets per second; (b) average parsing time of one packet.

Then, the performance of maintaining the protocol bundles is evaluated, which measures the execution time to start and update the protocol bundles of the UDA as the number of protocol bundles increases. As Figure 15 shows, when the number of bundles increases, the execution times of the two methods Start() and Update() appear to be independent of the number of bundles. In addition, we compared the response time of the OSGi framework against a traditional native Java code approach. Specifically, each device protocol bundle is defined as sequentially gathering data, and the data retrieval latency for both approaches is set to 100 ms.
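For reference, a device protocol bundle in this kind of setup is essentially an OSGi BundleActivator that registers its parser as a service so that it can be started, updated, and stopped at runtime. The sketch below is an assumption about the plug-in contract (the DeviceProtocol interface and ModbusProtocol stub stand in for the UDA's actual interfaces), not the framework's code.

```java
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;

// Assumed plug-in contract: a protocol either parses a packet or rejects it.
interface DeviceProtocol {
    boolean canParse(byte[] packet);
    byte[] extractDataPart(byte[] packet);
}

// Trivial stand-in implementation; real ModBus parsing is omitted.
class ModbusProtocol implements DeviceProtocol {
    public boolean canParse(byte[] packet) { return packet.length > 4; }
    public byte[] extractDataPart(byte[] packet) { return packet; }
}

public class ModbusProtocolActivator implements BundleActivator {

    private ServiceRegistration<DeviceProtocol> registration;

    @Override
    public void start(BundleContext context) {
        // Publishing the protocol as an OSGi service lets the protocol manager
        // discover it and offer incoming packets to it at runtime.
        registration = context.registerService(DeviceProtocol.class, new ModbusProtocol(), null);
    }

    @Override
    public void stop(BundleContext context) {
        // Unregistering allows the bundle to be updated or removed without
        // restarting the UDA.
        registration.unregister();
    }
}
```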

Figure 15. Execution times with bundles variation.

Figure 16 shows the experimental results. The response time decreases drastically with the OSGi-based UDA. At the beginning, the delay is caused by establishing the network connection and resolving dependencies for the device access framework. As time elapses, the framework stores the data as its persistent state, and the invocation time is significantly reduced. These results demonstrate that the UDA is competent for time-critical application scenarios.


Figure 16. The response time of the OSGi framework against the traditional native Java code approach.

The proposed resource platform supports two patterns for service interaction: event-driven (i.e., publish-subscribe) and traditional request-response. We evaluated the response times of these two patterns when concurrent sensing data are received from different underground devices. Specifically, to capture the case of receiving sensing data and immediately dispatching them to the applications, we simulated 500 sensing devices sending concurrent data. As shown in Figure 17, the maximum number of processed transactions per second (TPS) of the traditional request-response pattern is 74 and the average TPS is 64.42, whereas the maximum TPS of the publish-subscribe pattern is 167.6 and the average TPS is 130.4. In addition, as shown in Figure 18, the maximum service response time of request-response is 150 ms and the average response time is 134 ms, while the maximum response time of publish-subscribe is 42 ms and the average response time is 30 ms. The publish-subscribe pattern adopts an asynchronous, multicast-like communication pattern and can dispatch the published data without waiting for an acknowledgment from the subscriber. Thus, the publisher can quickly move on to the next receiver within deterministic time, without any synchronous operations. Therefore, the data-centric IoT paradigm is realized more efficiently by the publish-subscribe-based data distribution pattern than by the traditional request-response pattern.

Figure 17. Transactions processed per second of event-driven and request-response.


Figure 18. Service response time of event-driven and request-response.

Figure 19 indicates the delay time of data uploading and retrieval. Cosm and Yeelink are well-known WoT platforms that act as data broker centers for data uploading and retrieval. Local represents the local mode discussed in Section 3. Remote P2P represents the peer-to-peer mode, i.e., the data provider publishes data to the remote LP in another application domain through the Internet. A data point is the data cell that contains a timestamp and a value. Yeelink limits the minimum interval between two requests to 10 s, and Cosm has a similar restriction. Due to these restrictions, we cannot perform continuous uploading and retrieval operations. For example, we cannot test the time of continuously uploading 1000 data points, so we only test the time of a batch operation. The maximum number of data points retrieved from Cosm in one query is 1000, and the maximum number of uploads is 500. The maximum number retrieved from Yeelink in one query is 400. We repeat the measurement 10 times for each data point number and take the averages.

Figure 19. (a) average uploading time; (b) average retrieval time.

As shown in Figure 19a, the uploading times of Cosm, Yeelink, and our Remote mode are similar. The uploading time of the local mode is stable and much smaller. The retrieval times of Cosm and Yeelink are shown in Figure 19b. Figure 19b only shows the results for the case where the requested data are already on the Cosm and Yeelink platforms. However, if the data are not on the platform (e.g., the data are uploaded in pull mode instead of push mode), the time for getting the data becomes almost two times longer, because the request bears twice the delay, i.e., from the remote data providers to Cosm and from Cosm to the consumers. In contrast, using the P2P mode discussed in Section 3, the data are provided from the remote LPs to the consumers directly, which takes less time. Thus, the performance of both the local mode and the P2P mode surpasses Cosm and Yeelink. The results indicate that the interaction modes of this framework are more suitable for time-critical scenarios than existing commercial WoT platforms.

8. Conclusions

This paper proposes a WoT resource framework which provides the infrastructure for the customizable openness and sharing of users' data and resources under the premise of guaranteeing the real-time use of their own applications. Moreover, it is able to access large-scale systems of heterogeneous and legacy devices to unlock these valuable resources. The proposed framework is validated by actual system models and experimental evaluations. We do not discuss in this paper the issues of security and privilege-based access control; some of these topics are the subject of our ongoing work. Up to now, the framework only has one single public platform, so it may cause single-point-of-failure problems. In our future work, we envision that public platforms will be federated and have a hierarchical structure like domain name servers, distributed across the different domains of the Internet.

Acknowledgments: This work is supported by the National Natural Science Foundation of China (Grant No. 61132001, 61501048) and the China Postdoctoral Science Foundation funded project (Grant No. 2016T90067, 2015M570060).

Author Contributions: Shuai Zhao and Le Yu conceived and designed the experiments; Shuai Zhao performed the experiments; Le Yu and Bo Cheng analyzed the data; Bo Cheng contributed analysis tools; Shuai Zhao, Le Yu, and Bo Cheng wrote the paper.

Conflicts of Interest: The authors declare no conflict of interest.
