Technical Reports


Most research in the area of code clone discovery deals with source code, due to the complexity of finding clones in a dynamic environment. KAMINO instead matches similar data flows to find semantic code clones. With preliminary results indicating that KAMINO finds code clones, future tests will compare its robustness to that of existing code clone detection tools.

Testing stateful applications is challenging, as it can be difficult to identify hidden dependencies on program state. These dependencies may manifest between several test cases, or simply within a single test case. When it's left to developers to document, understand, and respond to these dependencies, a mistake can result in unexpected and invalid test results.

Although current testing infrastructure does not leverage state dependency information, we argue that it could, and that doing so would improve testing. Our ongoing work applies similar analyses to improve state-of-the-art test suite prioritization techniques and test case generation techniques. This work is advised by Professor Gail Kaiser.

In correlation-based time-of-flight (C-ToF) imaging systems, light sources with temporally varying intensities illuminate the scene.

Due to global illumination, the temporally varying radiance received at the sensor is a combination of light received along multiple paths. Recovering scene properties from such multi-path radiance is challenging.

We propose phasor imaging, a framework for performing fast inverse light transport analysis using C-ToF sensors. Phasor imaging is based on the idea that by representing light transport quantities as phasors and light transport events as phasor transformations, light transport analysis can be simplified in the temporal frequency domain. We study the effect of temporal illumination frequencies on light transport, and show that for a broad range of scenes, global radiance multi-path interference vanishes for frequencies higher than a scene-dependent threshold.
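As a brief illustration of the notation (the standard C-ToF phasor formalism; the symbols below are ours, not necessarily the paper's), a source modulated at temporal frequency \(\omega\) is written as a complex phasor, and each light path of attenuation \(\alpha_p\) and travel time \(\tau_p\) acts on it as a multiplication, so the received radiance is a phasor sum over paths:

\[
\mathbf{r}(\omega) \;=\; \sum_{p} \alpha_p \, e^{-j\omega\tau_p}\, \mathbf{s}(\omega),
\]

where \(\mathbf{s}(\omega)\) is the source phasor. Analyzing when the global (multi-bounce) terms of this sum average out as \(\omega\) grows is what yields the scene-dependent frequency threshold mentioned above.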

We use this observation for developing two novel scene recovery techniques. First, we present Micro ToF imaging, a ToF-based shape recovery technique that is robust to errors due to global illumination.

Second, we present a technique for separating the direct and global components of radiance. Both techniques require capturing only a handful of images and minimal computation. We demonstrate the validity of the presented techniques via simulations and experiments performed with our hardware prototype.

Schur complement trick for positive semi-definite energies. The trick is especially useful for solving Lagrangian saddle point problems when minimizing quadratic energies subject to linear equality constraints [Gill et al.].
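For reference, a minimal statement of the standard trick (assuming an invertible, positive definite \(Q\); the notation here is ours): minimizing \(\tfrac{1}{2}x^{\top}Qx - b^{\top}x\) subject to \(Ax = c\) yields the saddle point (KKT) system

\[
\begin{bmatrix} Q & A^{\top} \\ A & 0 \end{bmatrix}
\begin{bmatrix} x \\ \lambda \end{bmatrix}
=
\begin{bmatrix} b \\ c \end{bmatrix},
\]

and eliminating \(x\) leaves the smaller Schur complement system in the multipliers alone:

\[
(A Q^{-1} A^{\top})\,\lambda = A Q^{-1} b - c,
\qquad
x = Q^{-1}\!\left(b - A^{\top}\lambda\right).
\]

When \(Q\) is only positive semi-definite, \(Q^{-1}\) does not exist, which is the case the generalization below addresses.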

I generalize this technique to positive semi-definite Hessians.

Cloud computing offers a scalable, low-cost, and resilient platform for critical applications. Securing these applications against attacks targeting unknown vulnerabilities remains an unsolved challenge.

Network anomaly detection addresses such zero-day attacks by modeling attributes of attack-free application traffic and raising alerts when new traffic deviates from this model. Content anomaly detection (CAD) is a variant of this approach that models the payloads of such traffic instead of higher-level attributes.

Zero-day attacks then appear as outliers to properly trained CAD sensors. In the past, CAD was unsuited to cloud environments due to the relative overhead of content inspection and the dynamic routing of content paths to geographically diverse sites.

We challenge this notion and introduce new methods for efficiently aggregating content models to enable scalable CAD in dynamically-pathed environments such as the cloud. These methods eliminate the need to exchange raw content, drastically reduce network and CPU overhead, and offer varying levels of content privacy.
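As a rough sketch of how aggregation can avoid exchanging raw content, assuming Bloom-filter content models over payload n-grams (one of the classifier types evaluated below; all names and parameters here are illustrative, not the paper's implementation), sites can merge models by exchanging and OR-ing bit arrays:

```python
import hashlib

class BloomModel:
    """Content model: a Bloom filter over payload n-grams."""
    def __init__(self, m=1 << 16, k=4):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8)

    def _hashes(self, ngram: bytes):
        for i in range(self.k):
            h = hashlib.sha256(bytes([i]) + ngram).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, payload: bytes, n=5):
        for j in range(len(payload) - n + 1):
            for pos in self._hashes(payload[j:j + n]):
                self.bits[pos // 8] |= 1 << (pos % 8)

    def contains(self, ngram: bytes) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._hashes(ngram))

    def merge(self, other: "BloomModel"):
        # Aggregation step: only bit arrays cross the network, never payloads.
        self.bits = bytearray(a | b for a, b in zip(self.bits, other.bits))

    def anomaly_score(self, payload: bytes, n=5) -> float:
        grams = [payload[j:j + n] for j in range(len(payload) - n + 1)]
        unseen = sum(not self.contains(g) for g in grams)
        return unseen / max(len(grams), 1)  # fraction of never-seen n-grams
```

Merging is idempotent and commutative, which is what makes it workable when content is dynamically routed across geographically diverse sites.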

We perform a comparative analysis of our methods using Random Forest, Logistic Regression, and Bloom Filter-based classifiers for operation in the cloud or other distributed settings such as wireless sensor networks.

We find that content model aggregation offers statistically significant improvements over non-aggregate models with minimal overhead, and that distributed and non-distributed CAD have statistically indistinguishable performance. Thus, these methods enable the practical deployment of accurate CAD sensors in a distributed attack detection infrastructure.

Vernam, Mauborgne, and Friedman: Examination of other documents suggests a different narrative.

It is most likely that Vernam came up with the need for non-repetition; Mauborgne, though, apparently contributed materially to the invention of the two-tape variant.

Furthermore, there is reason to suspect that Mauborgne suggested the need for randomness to Vernam. Parker Hitt may have done so as well; William Friedman definitely did. Finally, we show that Friedman's attacks on the two-tape variant likely led to his invention of the index of coincidence, arguably the single most important publication in the history of cryptanalysis.
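For context, the index of coincidence is the probability that two letters drawn at random from a text are equal. A minimal computation (the standard textbook formula, shown here only to make the statistic concrete; the report itself is historical and contains no code):

```python
from collections import Counter

def index_of_coincidence(text: str) -> float:
    """Probability that two randomly chosen letters of the text match."""
    letters = [c for c in text.upper() if c.isalpha()]
    n = len(letters)
    if n < 2:
        return 0.0
    counts = Counter(letters)
    return sum(f * (f - 1) for f in counts.values()) / (n * (n - 1))

# English plaintext scores near 0.066, while a uniformly random letter
# stream scores near 1/26 (about 0.038); that gap is what makes the
# statistic so powerful for cryptanalysis.
```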

Exploring Societal Computing based on the Example of Privacy. Data privacy when using online systems like Facebook and Amazon has become an increasingly popular topic in the last few years. This thesis consists of the following four projects that aim to address the issues of privacy and software engineering. First, little is known about how users and developers perceive privacy and which concrete measures would mitigate their privacy concerns.

To investigate privacy requirements, we conducted an online survey with closed and open questions and collected valid responses. Our results show that users often reduce privacy to security, with data sharing and data breaches being their biggest concerns. Users are more concerned about the content of their documents and their personal data such as location than about their interaction data.

Unlike users, developers clearly prefer technical measures like data anonymization and think that privacy laws and policies are less effective. We also observed interesting differences between people from different geographies.

For example, people from Europe are more concerned about data breaches than people from North America. Our results contribute to developing a user-driven privacy framework that is based on empirical evidence in addition to the legal, technical, and commercial perspectives.

Second, a related challenge is to make privacy more understandable in complex systems that may have a variety of user interface options, which may change often.

As social network platforms have evolved, the ability for users to control how and with whom information is being shared introduces challenges concerning the configuration and comprehension of privacy settings. To address these concerns, our crowdsourced approach simplifies the understanding of privacy settings by using data collected from users over a 17-month period to generate visualizations that allow users to compare their personal settings to an arbitrary subset of individuals of their choosing.

To validate our approach we conducted an online survey with closed and open questions and collected 59 valid responses, after which we conducted follow-up interviews with 10 respondents. Third, we present a novel technique that can be used by end-users for detecting changes in privacy, i.e., privacy bugs. Using a social approach for detecting privacy bugs, we present two prototype tools. Our evaluation shows the feasibility and utility of our approach for detecting privacy bugs.

We highlight two interesting case studies on the bugs that were discovered using our tools. To the best of our knowledge, this is the first technique that leverages regression testing for detecting privacy bugs from an end-user perspective.

Fourth, approaches to addressing these privacy concerns typically require substantial extra computational resources, which might be beneficial where privacy is concerned, but may have significant negative impact with respect to Green Computing and sustainability, another major societal concern. Spending more computation time results in spending more energy and other resources that make the software system less sustainable. We show the feasibility, sustainability, and utility of our approach and what types of privacy threats it can mitigate.

Finally, we generalize the problem of privacy and its tradeoffs. As Social Computing has increasingly captivated the general public, it has become a popular research area for computer scientists. Social Computing research focuses on online social behavior and using artifacts derived from it for providing recommendations and other useful community knowledge.

Unfortunately, some of that behavior and knowledge incur societal costs, particularly with regards to Privacy, which is viewed quite differently by different populations as well as regulated differently in different locales.

But clever technical solutions to those challenges may impose additional societal costs, e.g., with respect to sustainability. We propose a new crosscutting research area, Societal Computing, that focuses on the technical tradeoffs among computational models and application domains that raise significant societal issues.

We highlight some of the relevant research topics and open problems that we foresee in Societal Computing. We feel that these topics, and Societal Computing in general, need to gain prominence as they will provide useful avenues of research leading to increasing benefits for society as a whole.

We report the results of experiments on the convergence of the multimaterial mesh-based surface tracking method introduced by the same authors.

Under mesh refinement, approximately first-order or higher convergence in L1 and L2 is shown for vertex positions, face normals, and non-manifold junction curves in a number of scenarios involving the new operations proposed in the method.

Cyberwar is very much in the news these days. It is tempting to try to understand the economics of such an activity, if only qualitatively. What effort is required? What can such attacks accomplish? What does this say, if anything, about the likelihood of cyberwar?

Internal Power Oversight for Applications. This paper introduces energy exchanges, a set of abstractions that allow applications to help hardware and operating systems manage power and energy consumption.

In particular, the abstractions offer audits and budgets, which watch and cap the power or energy of some piece of the application. The interface also exposes energy and power usage reports, which an application may use to change its behavior.

Such information complements existing system-wide energy management by operating systems or hardware, which provides global fairness and protection but is unaware of the internal dynamics of an application. The library employs an accounting technique that attributes shares of system-wide energy consumption, as reported by hardware energy meters available on newer platforms, to individual application threads.
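A minimal sketch of what such abstractions might look like to an application, assuming a hypothetical Python-style interface (EnergyAudit, EnergyBudget, and read_meter are our illustrative names, not the paper's API; read_meter stands in for a cumulative system-wide hardware energy meter):

```python
def read_meter() -> float:
    """Placeholder for a cumulative system-wide energy reading, in joules."""
    raise NotImplementedError  # would wrap a hardware meter (e.g., RAPL-like)

class EnergyAudit:
    """Audit: watch the energy consumed by a region of the application."""
    def __enter__(self):
        self.start = read_meter()
        return self
    def __exit__(self, *exc):
        self.joules = read_meter() - self.start
        return False

class EnergyBudget(EnergyAudit):
    """Budget: cap a region's energy; the application polls and adapts."""
    def __init__(self, cap_joules: float):
        self.cap = cap_joules
    def exceeded(self) -> bool:
        return (read_meter() - self.start) > self.cap

# Intended use: e.g., a video decoder lowers its quality setting once its
# budget is exceeded -- an application-internal tradeoff the OS cannot see.
```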

We use the library to demonstrate three applications of energy exchanges.

Dynamic taint analysis is a well-known information flow analysis problem with many possible applications. Taint tracking allows for analysis of application data flow by assigning labels to inputs, and then propagating those labels through data flow.
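The propagation idea can be shown in a few lines (a language-agnostic toy in Python, not Phosphor's JVM implementation): values carry label sets, and every operation on values unions the labels of its operands:

```python
class Tainted:
    """A value paired with the set of taint labels attached to it."""
    def __init__(self, value, labels=frozenset()):
        self.value, self.labels = value, frozenset(labels)

    def __add__(self, other):
        # Propagation rule: the result carries the union of operand labels.
        if isinstance(other, Tainted):
            return Tainted(self.value + other.value, self.labels | other.labels)
        return Tainted(self.value + other, self.labels)

user_input = Tainted("alice", {"untrusted"})        # source: label the input
query = Tainted("SELECT * FROM users WHERE name=") + user_input
assert "untrusted" in query.labels                  # sink: check before use
```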

Taint tracking systems traditionally compromise among performance, precision, accuracy, and portability. Performance can be critical, as these systems are typically intended to be deployed with software, and hence must have low overhead. To be deployed in security-conscious settings, taint tracking must also be accurate and precise.

Dynamic taint tracking must be portable in order to be easily deployed and adopted for real world purposes, without requiring recompilation of the operating system or language interpreter, and without requiring access to application source code.

We present Phosphor, a dynamic taint tracking system for the Java Virtual Machine (JVM) that simultaneously achieves our goals of performance, accuracy, precision, and portability. Moreover, to our knowledge, it is the first portable general-purpose taint tracking system for the JVM. This paper describes the approach that Phosphor uses to achieve portable taint tracking in the JVM.

Despite the variety of choices regarding hardware and software, to date a large number of computer systems remain identical.

One way to counter this problem is to diversify systems so that attackers cannot quickly and easily compromise a large number of machines. For instance, if each system has a different ISA, the attacker has to invest more time in developing exploits that run on every system manifestation. It is not that each individual attack gets harder, but the spread of malware slows down.

Further, if the diversified ISA is kept secret from the attacker, the bar for exploitation is raised even higher. We also describe how practical development and deployment problems of diversified systems can be handled easily in the context of popular software distribution models, such as the mobile app store model.

Students traditionally learn microarchitecture by studying textual descriptions with diagrams but few analogies.

Several popular textbooks on this topic introduce concepts such as pipelining and caching in the context of simple paper-only architectures.

While this instructional style allows important concepts to be covered within a given class period, students have difficulty bridging the gap between what is covered in classes and real-world implementations. Discussing concrete implementations and complications would, however, take too much time. In this paper, we propose a technique of representing microarchitecture building blocks with animated metaphors to accelerate the process of learning about complex microarchitectures.

We represent hardware implementations as road networks that include specific patterns of traffic flow found in microarchitectural behavior. We believe the mental models developed by these students will serve them in remembering microarchitectural behavior and extend to learning new microarchitectures more easily.

Recent advances in hardware security have led to the development of FANCI (Functional Analysis for Nearly-Unused Circuit Identification), an analysis tool that identifies stealthy, malicious circuits within hardware designs that can perform backdoor behavior.

Evaluations of such tools against benchmarks and academic attacks are not always representative of the dynamic attack scenarios that can arise in the real world. In the Embedded Systems Challenge (ESC), teams from research groups on multiple continents created designs with malicious backdoors hidden in them as part of a red team effort to circumvent FANCI.

Notably, these backdoors were not placed into a priori known designs. The red team was allowed to create arbitrary, unspecified designs. Two interesting results came out of this effort. The first was that FANCI was surprisingly resilient to this wide variety of attacks and was not circumvented by any of the stealthy backdoors created by the red teams. The second result is that frequent-action backdoors, which are backdoors that are not made stealthy, were often successful.

These results emphasize the importance of combining FANCI with a reasonable degree of validation testing. The blue team efforts also exposed some aspects of the FANCI prototype that make analysis time-consuming in some cases, which motivates further development of the prototype.

Recent work has shown promise in using microarchitectural execution patterns to detect malware programs.

These detectors belong to a class of detectors known as signature-based detectors as they catch malware by comparing a program's execution pattern signature to execution patterns of known malware programs. In this work, we propose a new class of detectors - anomaly-based hardware malware detectors - that do not require signatures for malware detection, and thus can catch a wider range of malware including potentially novel ones.

We use unsupervised machine learning to build profiles of normal program execution based on data from performance counters, and use these profiles to detect significant deviations in program behavior that occur as a result of malware exploitation. We also examine the limits and challenges in implementing this approach in face of a sophisticated adversary attempting to evade anomaly-based detection.
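A rough sketch of the detection idea under stated assumptions (scikit-learn's one-class SVM over per-interval performance-counter deltas; the paper's actual features and model are not reproduced here, and the data file name is hypothetical):

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Rows: time windows of normal execution; columns: counter deltas
# (e.g., branches retired, cache misses, ...). Hypothetical training data.
baseline = np.load("normal_counter_windows.npy")

model = OneClassSVM(nu=0.01, kernel="rbf", gamma="scale").fit(baseline)

def is_anomalous(window: np.ndarray) -> bool:
    # -1 means the window deviates from the learned normal profile,
    # which may indicate exploitation of the monitored program.
    return model.predict(window.reshape(1, -1))[0] == -1
```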

The proposed detector is complementary to previously proposed signature-based detectors, and the two can be used together to improve security.

For ever-demanding cellphone users, the list of features that a smartphone supports just keeps growing.

Extrapolating the features of a present-day smartphone into the future, smartphones promise to make their users' lives far more agile. Given this current and future potential, the ability to virtualize smartphones, with all their real-world features, into a virtual platform is a boon for those who want to rigorously experiment with and customize virtualized smartphone hardware without spending extra money.

When accessible remotely with real-time responsiveness, this real-world behavior becomes valuable in many real-world systems, notably life-saving systems such as those that raise instantaneous alerts about harmful magnetic radiation in deep mining areas. Such life-saving systems could be installed at scale on desktops or large servers as virtualized smartphones, with the added support of virtualized sensors that remotely fetch readings from a real smartphone's hardware sensors in real time.

Based on these readings, people working in the affected areas can be alerted, and thus saved, by the operators at the desktops or large servers hosting the virtualized smartphones.

The current work on Sensor Emulation is quite distinct from existing and past sensor-related work. Its uniqueness comes from full-fledged sensor emulation in a virtualized smartphone environment, as opposed to building sophisticated physical systems that aggregate readings from real hardware sensors, possibly remotely and in real time. Examples of the latter include wireless-sensor-network-based remote sensing systems, which install hardware sensors in remote places and collect their readings at a centralized server for real-time or offline analysis.

In such systems, little is achieved beyond collecting real hardware sensor readings into a centralized entity. In the current work, by contrast, the emulated sensors behave exactly like the remote real hardware sensors: they can be calibrated, sped up, or slowed down in sampling frequency, and they influence sensor-based applications running inside the virtualized smartphone exactly as the real hardware sensors of a real phone would influence applications running on that phone.
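A conceptual sketch of such a relay loop (illustrative names only, not the project's code): readings fetched from the paired real phone are re-sampled at a configurable rate and scaled before being injected into the virtual phone's emulated sensor:

```python
import time

def relay_sensor(fetch_reading, inject_reading, hz: float, scale: float = 1.0):
    """fetch_reading(): pull one reading tuple from the paired real device.
    inject_reading(r): push a reading into the virtualized sensor.
    hz: emulated sampling frequency (may differ from the real sensor's).
    scale: a simple calibration factor applied to every axis."""
    period = 1.0 / hz
    while True:
        reading = fetch_reading()
        inject_reading(tuple(v * scale for v in reading))
        time.sleep(period)  # slows down or speeds up the emulated sensor
```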

In essence, the current work is about generalizing sensors, with as many of their real-world characteristics as possible, in a virtualized platform, rather than merely providing a framework to send and receive sensor readings over the network between real and virtual phones. Realizing the advantages of Sensor Emulation, which adds virtualized sensor support to emulated environments, the current work emulates a total of ten sensors present in the real smartphone, a Samsung Nexus S, an Android device.

Virtual phones run Android-x86 while real phones run Android. The reason for choosing Android-x86 for the virtual phone is that x86-based Android devices are richer in features than ARM-based ones; for example, a full-fledged x86 desktop or tablet has more features than a relatively small smartphone.

Of the ten sensors, five are real and the rest are virtual or synthetic. The emulated Android-x86 runs the Android Jelly Bean release. One noteworthy aspect of the accomplished Sensor Emulation is that it is demand-less: exactly the same sensor-based Android applications can use the sensors on the real and virtual phones, with absolutely no difference in their sensor-based behavior. Apart from a paired real-device scenario, in which the real hardware sensor readings are fetched, Sensor Emulation is also compatible with a remote-server scenario, in which artificially generated sensor readings are fetched from a remote server.

Once completed, Sensor Emulation was evaluated for each of the emulated sensors using applications from the Android Market as well as the Amazon Appstore.

The applications include both basic sensor-test applications that show raw sensor readings and advanced 3D sensor-driven games that are emulator-compatible, especially in terms of graphics. The evaluations showed the current work of Sensor Emulation to be generic, efficient, robust, fast, accurate, and realistic.

Though the current work targets Android-x86, the code written for it makes no assumption that the underlying platform is x86. Hence, the work should logically also be compatible with ARM-based emulated Android environments, though this has not been tested.

Our measurements show that video players frequently discard a large amount of video content even though it is successfully delivered to a client. We first investigate the root cause of this unwanted behavior.

Our architecture includes a selective packet discarding mechanism, which can be placed in packet data network gateways (P-GW). In addition, our QoS-aware rules assist video players in selecting an appropriate resolution under fluctuating channel conditions.

We monitor network conditions and configure QoS parameters to control the availability of the maximum bandwidth in real time. In our experimental setup, the proposed platform shows significant improvement.

We investigate video server selection algorithms in a distributed video-on-demand system. A location-aware video server selection algorithm assigns a video content server based on the network attachment point of a client.

We found that such distance-based algorithms carry the risk of directing a client to a suboptimal content server even when better-performing video delivery servers exist. To solve this problem, we propose to use dynamic network information, such as packet loss rates and round-trip time (RTT), measured between an edge node of a wireless network and candidate video servers.
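A minimal sketch of such a policy under stated assumptions (the scoring weights and names are illustrative, not the paper's algorithm): pick the server with the best measured path quality from the edge node rather than the geographically nearest one:

```python
def select_server(candidates):
    """candidates: list of (server, rtt_ms, loss_rate) measured at the edge."""
    def score(entry):
        _, rtt_ms, loss_rate = entry
        return rtt_ms + 1000.0 * loss_rate  # penalize loss heavily
    return min(candidates, key=score)[0]

best = select_server([("east", 40.0, 0.001), ("west", 25.0, 0.05)])
# -> "east": the nearer, lower-RTT server loses once loss is accounted for
```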

Our empirical study shows that the proposed architecture can provide higher TCP performance, leading to better viewing quality compared to location-based video server selection algorithms.

Belief propagation (BP), however, may converge only to a local optimum or may not converge at all. We explore an application to predicting equipment failure on an urban power network and demonstrate that the Bethe approximation can perform well even when BP fails to converge.

Introductory computer science courses traditionally focus on exposing students to basic programming and computer science theory, leaving little or no time to teach students about software testing. In the long term, students will appreciate the importance of testing as part of the software development life cycle.

As voice, multimedia, and data services converge to IP, there is a need for a new networking architecture to support future innovations and applications.

Such diverse network connectivity can be used to increase both reliability and performance by running applications over multiple links, sequentially for seamless user experience, or in parallel for bandwidth and performance enhancements. The existing networking stack, however, offers almost no support for intelligently exploiting such network, device, and location diversity. In this work, we survey recently proposed protocols and architectures that enable heterogeneous networking support.

Upon evaluation, we abstract common design patterns and propose a unified networking architecture that makes better use of a heterogeneous dynamic environment, both in terms of networks and devices. The architecture enables mobile nodes to make intelligent decisions about how and when to use each or a combination of networks, based on access policies.

With this new architecture, we envision a shift from current applications, which support a single network, location, and device at a time, to applications that can support multiple networks, multiple locations, and multiple devices.

To provide high performance at practical power levels, tomorrow's chips will have to consist primarily of application-specific logic that is only powered on when needed. This paper discusses synthesizing such logic from the functional language Haskell.

The proposed approach, which consists of rewriting steps that ultimately dismantle the source program into a simple dialect that enables a syntax-directed translation to hardware, enables aggressive parallelization and the synthesis of application-specific distributed memory systems. Transformations include scheduling arithmetic operations onto specific data paths, replacing recursion with iteration, and improving data locality by inlining recursive types.

A compiler based on these principles is under development.

Social network platforms have transformed how people communicate and share information. However, as these platforms have evolved, the ability for users to control how and with whom information is being shared introduces challenges concerning the configuration and comprehension of privacy settings.

To validate our approach we conducted an online survey with closed and open questions and collected 50 valid responses after which we conducted follow-up interviews with 10 respondents.

However, little is known about how users and developers perceive privacy and which concrete measures would mitigate privacy concerns. Users are more concerned about the content of their documents and personal data such as location than about their interaction data.

Testing large software packages can become very time intensive. To address this problem, researchers have investigated techniques such as Test Suite Minimization.

Test Suite Minimization reduces the number of tests in a suite by removing tests that appear redundant, at the risk of a reduction in fault-finding ability since it can be difficult to identify which tests are truly redundant.

We take a completely different approach to solving the same problem of long running test suites by instead reducing the time needed to execute each test, an approach that we call Unit Test Virtualization. With Unit Test Virtualization, we reduce the overhead of isolating each unit test with a lightweight virtualization container.

We describe the empirical analysis that grounds our approach and present VMVM, an implementation of Unit Test Virtualization targeting Java applications. We also compared VMVM to a well-known Test Suite Minimization technique, finding the reduction provided by VMVM to be four times greater while still executing every test, with no loss of fault-finding ability.
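A toy illustration of the idea (VMVM itself rewrites JVM bytecode; this Python sketch only shows the concept): instead of paying for one process per test, snapshot the mutable shared state a test can touch and restore it between tests within one process:

```python
import copy

GLOBAL_STATE = {"cache": {}, "config": {"retries": 3}}  # stand-in for statics

def run_isolated(tests, state):
    """Run each test against a freshly restored copy of the shared state."""
    snapshot = copy.deepcopy(state)
    for test in tests:
        test(state)                            # a test may pollute the state
        state.clear()
        state.update(copy.deepcopy(snapshot))  # cheap "fresh JVM" per test

def test_pollutes(state):
    state["config"]["retries"] = 0

def test_assumes_clean(state):
    assert state["config"]["retries"] == 3

run_isolated([test_pollutes, test_assumes_clean], GLOBAL_STATE)  # both pass
```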

Challenges arise in testing applications that do not have test oracles, i.e., applications for which it is not possible to know in advance what the correct output should be for arbitrary input. Metamorphic testing, introduced by Chen et al., addresses this by checking metamorphic properties: relations that must hold among the outputs of multiple related executions of the program.

Here, we improve upon previous work by presenting a new technique called Metamorphic Runtime Checking, which automatically conducts metamorphic testing of both the entire application and individual functions during a program's execution. This new approach improves the scope, scale, and sensitivity of metamorphic testing by allowing for the identification of more properties and execution of more tests, and increasing the likelihood of detecting faults not found by application-level properties.
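A sketch of a function-level check in this spirit (illustrative, not the paper's implementation): wrap a function so that each real invocation also runs a transformed invocation and verifies the expected relation between the two outputs:

```python
import functools

def metamorphic(transform_input, check_outputs):
    """Wrap f so each call also runs a transformed call and checks the relation."""
    def decorator(f):
        @functools.wraps(f)
        def wrapper(x):
            out = f(x)
            assert check_outputs(out, f(transform_input(x))), \
                f"metamorphic property violated for {f.__name__}"
            return out
        return wrapper
    return decorator

# Example property: permuting the input must not change the result.
@metamorphic(lambda xs: list(reversed(xs)), lambda a, b: a == b)
def total(xs):
    return sum(xs)

total([3, 1, 2])  # the property is checked alongside the real call
```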

We previously reported our investigation of an earlier fall offering of the Columbia University course COMS W Advanced Software Engineering; here we report on the subsequent fall offering and contrast it with the previous year.

We summarize our main findings.

Low-latency anonymous communication networks, such as Tor, are geared towards web browsing, instant messaging, and other semi-interactive applications. To achieve acceptable quality of service, these systems attempt to preserve packet inter-arrival characteristics, such as inter-packet delay.

Consequently, a powerful adversary can mount traffic analysis attacks by observing similar traffic patterns at various points of the network, linking together otherwise unrelated network connections. Previous research has shown that having access to a few Internet exchange points is enough for monitoring a significant percentage of the network paths from Tor nodes to destination servers.

Although the capacity of current networks makes packet-level monitoring at such a scale quite challenging, adversaries could potentially use less accurate but readily available traffic monitoring functionality, such as Cisco's NetFlow, to mount large-scale traffic analysis attacks.

In this paper, we assess the feasibility and effectiveness of practical traffic analysis attacks against the Tor network using NetFlow data. We present an active traffic analysis method based on deliberately perturbing the characteristics of user traffic at the server side, and observing a similar perturbation at the client side through statistical correlation. We evaluate the accuracy of our method using both in-lab testing, as well as data gathered from a public Tor relay serving hundreds of users.
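A sketch of the correlation step under stated assumptions (the perturbation is a known on/off byte-rate pattern, and the observer holds coarse per-interval byte counts for a candidate client flow; values are illustrative):

```python
import numpy as np

def linkage_score(injected_pattern, observed_bytes):
    """Pearson correlation between the injected server-side pattern and a
    candidate client flow's per-interval byte counts; values near 1
    suggest the two flows are linked."""
    a = np.asarray(injected_pattern, dtype=float)
    b = np.asarray(observed_bytes, dtype=float)
    return float(np.corrcoef(a, b)[0, 1])

pattern = [1, 0, 1, 1, 0, 0, 1, 0]                 # server-side perturbation
candidate = [980, 40, 1010, 950, 60, 30, 990, 55]  # NetFlow-style counts
print(linkage_score(pattern, candidate))           # close to 1.0 here
```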

A Mobile Video Traffic Analysis: video streaming on mobile devices is on the rise. According to recent reports, mobile video streaming accounts for a large share of all mobile traffic. We analyzed the network traffic behaviors of the two most popular HTTP-based video streaming services. Our research indicates that network traffic behavior depends on factors such as the type of device, the multimedia applications in use, and network conditions. Furthermore, we found that a large part of the downloaded video content can be discarded by a video player even though it is successfully delivered to a client.

This unwanted behavior often occurs when the video player changes resolution under fluctuating network conditions and the playout buffer is full while downloading a video.

Energy optimizations are being aggressively pursued today. Can these optimizations open up security vulnerabilities? In this invited talk at the Energy Secure System Architectures Workshop, run by Pradip Bose of the IBM Watson research center, I discussed the security implications of energy optimizations, the capabilities of attackers, the ease of exploitation, and the potential payoff to the attacker.

I presented a mini tutorial on security for computer architects, and a personal research wish list for this emerging topic.

This paper presents a review of modern-day schlieren optics systems and their applications. Schlieren imaging systems provide a powerful technique to visualize changes or nonuniformities in the refractive index of air or other transparent media. With the popularization of computational imaging techniques and the widespread availability of digital imaging systems, schlieren systems provide novel methods of viewing transparent fluid dynamics. This paper presents a historical background of the technique, describes the methodology behind the system, presents a mathematical proof of schlieren fundamentals, and lists various recent applications and advancements in schlieren studies.
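For context, the first-order relation underlying schlieren sensitivity (a textbook result, stated in our notation rather than the paper's): a ray traversing a medium of refractive index \(n(x, y, z)\) along \(z\) is deflected by an angle proportional to the transverse index gradient,

\[
\varepsilon_x \;\approx\; \frac{1}{n_0} \int \frac{\partial n}{\partial x}\, dz,
\]

so an optical element such as a knife edge, which converts the deflection \(\varepsilon_x\) into an intensity change at the image plane, renders refractive index gradients, and hence density gradients, visible.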

The increasing number of WiFi devices, together with non-WiFi devices sharing the same spectrum, can degrade WiFi performance. Although the problem sources can be easily removed in many cases, it is difficult for end users to identify the root cause. We introduce WiSlow, a software tool that diagnoses the root causes of poor WiFi performance with user-level network probes and leverages peer collaboration to identify the location of the causes.

We elaborate on its two main methods.

The Internet of Things (IoT) enables the physical world to be connected and controlled over the Internet. This paper presents a smart gateway platform that connects everyday objects such as lights, thermometers, and TVs over the Internet.

The proposed hardware architecture is implemented on an Arduino platform with a variety of off-the-shelf home automation technologies such as Zigbee and X10. Using this microcontroller-based platform, the SECE (Sense Everything, Control Everything) system allows users to create various IoT services such as monitoring sensors, controlling actuators, triggering action events, and periodic sensor reporting.

Mobile devices are vertically integrated systems that are powerful, useful platforms, but unfortunately limit user choice and lock users and developers into a particular mobile ecosystem, such as iOS or Android. We present Chameleon, a multi-persona binary compatibility architecture that allows mobile device users to run applications built for different mobile ecosystems together on the same smartphone or tablet.

Chameleon enhances the domestic operating system of a device with personas to mimic the application binary interface of a foreign operating system to run unmodified foreign binary applications.

To accomplish this without reimplementing the entire foreign operating system from scratch, Chameleon provides four key mechanisms. First, a multi-persona binary interface is used that can load and execute both domestic and foreign applications that use different sets of system calls.

Second, compile-time code adaptation makes it simple to reuse existing unmodified foreign kernel code in the domestic kernel. Third, API interposition and passport system calls make it possible to reuse foreign user code together with domestic kernel facilities to support foreign kernel functionality in user space.

Fourth, schizophrenic processes allow foreign applications to use domestic libraries to access proprietary software and hardware interfaces on the device. We have built a Chameleon prototype and demonstrate that it imposes only modest performance overhead and can run iOS applications from the Apple App Store together with Android applications from Google Play on a Nexus 7 tablet running the latest version of Android.
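A conceptual illustration of the first mechanism (a toy Python model of per-persona system-call dispatch; Chameleon itself operates on binary ABIs inside the kernel, so everything here is illustrative): each process is tagged with a persona, and traps are routed through that persona's own table, letting domestic and foreign binaries coexist on one kernel:

```python
# Two ABIs that number their system calls differently.
DOMESTIC = {0: "read", 1: "write", 2: "open"}
FOREIGN = {3: "read", 4: "write", 5: "open"}

PERSONA_TABLES = {"domestic": DOMESTIC, "foreign": FOREIGN}

def dispatch(persona: str, syscall_nr: int) -> str:
    """Route a trap number through the calling process's persona table."""
    table = PERSONA_TABLES[persona]
    if syscall_nr not in table:
        raise OSError("ENOSYS: not part of this persona's ABI")
    return table[syscall_nr]

# The same kernel service is reached under either ABI's numbering.
assert dispatch("domestic", 0) == dispatch("foreign", 3) == "read"
```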

We provide the first measurements on real hardware of a complete hypervisor using ARM hardware virtualization support.

System reliability is a critical requirement of cyber-physical systems (CPS). An unreliable CPS often leads to system malfunctions, service disruptions, financial losses, and even loss of human life. Prior research has proposed reliability benchmarks for specific CPS, such as wind power plants and wireless sensor networks, and has also studied individual CPS components, such as software and specific hardware.
