Fig 4 - uploaded by Mohammed Hassan
Content may be subject to copyright.
Method Selection Graph 

Source publication
Conference Paper
Full-text available
As mobile devices are battery powered and have fewer computing resources, plenty of research has been conducted on how to efficiently offload computing-intensive tasks in a mobile application to a more powerful counterpart. However, prior research either implicitly assumes that the computing-intensive tasks are known in advance or the application deve...

Contexts in source publication

Context 1
... partitioning and training. We adopt two learning models to predict the response time for both local and remote method executions in a dynamically changing environment where bandwidth, latency, data size, and the server's available CPU and memory change. Algorithm 6 presents the pseudo code for this dynamic application partitioning and training. Table II shows the classification overhead of a single experiment on-device and on-server and the one-time training overhead for the different models. Note that although DT is also lightweight with moderate accuracy, we exclude it since it cannot extrapolate its decision and suffers from over-fitting. Both MLP and SVM perform well in terms of accuracy and prediction time, can capture the relationship between multiple features, and support online training [19]. SVM is easy to train and free from over-fitting. Although MLP takes a bit more time for training [15], it supports online learning and works well in noisy environments, and the amortized cost for one experiment is low. Thus Elicit can use either of them, and we will evaluate the performance of both models later. With Algorithm 6, we execute the application locally if there is no method to offload. If any method is offloaded, we monitor the response time while it is being executed on the server side. These values are used for training the learning models for the on-server response time. We also update the on-device learning model in a similar fashion when the methods (and the application) are executed locally. In this subsection, we illustrate our algorithm with an example.
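The Algorithm 6 loop above can be sketched as follows. This is a minimal, dependency-free stand-in: the paper uses MLP or SVM predictors, which are replaced here by a hypothetical linear model trained by SGD; `OnlinePredictor`, `choose_execution`, and the feature layout are illustrative names, not the paper's API.

```python
# Sketch of online response-time prediction (hypothetical feature set
# and names; the paper's MLP/SVM models are replaced by a plain
# SGD-trained linear model to keep the example dependency-free).

class OnlinePredictor:
    """Incrementally learns response time from environment features."""

    def __init__(self, n_features, lr=0.01):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        return sum(wi * xi for wi, xi in zip(self.w, x)) + self.b

    def update(self, x, observed_time):
        # One SGD step on squared error, run after each local or
        # offloaded execution, as Algorithm 6 does with its two models.
        err = self.predict(x) - observed_time
        for i, xi in enumerate(x):
            self.w[i] -= self.lr * err * xi
        self.b -= self.lr * err


def choose_execution(local_model, server_model, features):
    """Offload only when the server-side prediction is faster."""
    if server_model.predict(features) < local_model.predict(features):
        return "offload"
    return "local"
```

One model is updated from locally measured response times, the other from monitored server-side executions, mirroring the two-model update described above.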
Figure 3 shows a method invocation diagram of an application. Each white node represents a method, and a solid arrow represents a method invocation from another method. The dark node represents a variable, and a dotted arrow represents an access to a variable (global or class) modified by other methods. Each arrow has an associated cost. For example, when method A invokes another method B , A sends the parameters to the callee method B . When we offload the callee method, these parameters have to be sent to the server as well, which takes time over the network; we denote this cost by e AB . This cost can be found by monitoring the time taken to send these parameters from the mobile device to the server side. Similarly, if an offloaded method I accesses a global or class variable V modified by another method B , this variable has to be sent over the network so that the clone method on the server side can access it and execute flawlessly. We denote this cost as e VI . Suppose in this graph we find that methods C and G consume most of the resources. If we offload C , methods D , E , and F execute on the server side as well. Method F accesses a camera; as a result, it is not possible to execute method F on the server side. If we still decide to offload method C , the execution has to be transferred back to the mobile device whenever F accesses the camera. To limit such back-and-forth execution, we should not offload C . To identify such constrained methods, our Algorithm 3 finds the list of classes M CC of the reachable methods RM of method C , which is { D , E , F } . As the M CC of C includes a class accessing the camera (from method F ), we should not offload C . Similarly, by offloading G , we also execute H and I on the server side. So when method I is executed on the server side, it has to access variable V .
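The constrained-method check (Algorithms 1 and 3) can be sketched as follows; the call graph, class names, and the constrained-class list mirror the Figure 3 example but are otherwise hypothetical.

```python
# Sketch of the constrained-method check: a method is not offloadable
# if any reachable method belongs to a class that touches device-only
# hardware. Graph and class data are hypothetical, mirroring Fig. 3.

from collections import deque

CALL_GRAPH = {            # caller -> callees (solid arrows in Fig. 3)
    "A": ["B", "C"], "B": [], "C": ["D", "E"], "D": ["F"],
    "E": [], "F": [], "G": ["H"], "H": ["I"], "I": [],
}
METHOD_CLASS = {m: f"Class{m}" for m in CALL_GRAPH}
CONSTRAINED_CLASSES = {"ClassF"}   # ClassF accesses the camera

def reachable_methods(method):
    """RM: all methods transitively invoked by `method` (Algorithm 1)."""
    seen, queue = set(), deque(CALL_GRAPH[method])
    while queue:
        m = queue.popleft()
        if m not in seen:
            seen.add(m)
            queue.extend(CALL_GRAPH[m])
    return seen

def is_offloadable(method):
    """MCC check (Algorithm 3): no class in the closure may be constrained."""
    mcc = {METHOD_CLASS[m] for m in reachable_methods(method) | {method}}
    return not (mcc & CONSTRAINED_CLASSES)
```

With this data, `C` is rejected because `F` (and thus `ClassF`) is in its closure, while `G`'s closure `{H, I}` contains no constrained class.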
As a result, while offloading method G , we have to send variable V to the server, which will cost us e VI . In order to find such a set of variables, we deduce RV of method G . This set RV of method G has to be offloaded to the server side in order to offload the method G to the server. Note that if G 's V CC (the classes of RV ) includes any of the constrained classes C , we do not offload G either. In addition, if we offload a method whose RV or RM is updated in parallel by other methods executing on the mobile device, we have to communicate again from the mobile device to the server. To minimize such overhead and inconsistency, we do not allow these methods to be offloaded. As illustrated in Figure 3, if any of methods A, C, D, E, F, or B accesses the RM or RV of G , we do not offload G . Thus, we find the Method Call Graph of B , G , H , and I and convert it to the Method Selection Graph shown in Figure 4. In this graph, we introduce one additional node m ′ for each method m . So for each of B , G , H , and I , we add B ′ , G ′ , H ′ , and I ′ in Figure 4. We introduce two additional nodes λ and μ . The capacity of the edges between λ and B , G , H , I is set to their corresponding on-device execution time. The capacity of the edges between B ′ , G ′ , H ′ , I ′ and μ is set to the corresponding on-server execution time. The capacity C between any other two nodes is set to the summation of the ranks of the methods in this graph ( C = ( B m + G m + H m + I m + D m ) + ( B s + G s + H s + I s + D s ) ). Figure 4 shows the corresponding Method Selection Graph. The dashed curve shows a min-cut partition where we offload the methods G , H , and I . So far we have discussed our solution for optimizing the response time. The same approach can be utilized to optimize the energy consumption.

IV. IMPLEMENTATION

We implement a prototype of Elicit in the Dalvik VM for Android applications.
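The min-cut partition over the Method Selection Graph described above can be sketched with a small max-flow routine. All capacities here are hypothetical, the inter-method communication edges (the e AB -style costs) are omitted for brevity, and `lam`/`mu` stand for λ/μ; this is an illustrative reading of the construction, not the paper's implementation.

```python
# Sketch of the Method Selection Graph min cut. Capacities (ms) are
# hypothetical; the paper's graph also carries communication-cost
# edges between methods, omitted here for brevity.
from collections import defaultdict, deque

def add_edge(g, u, v, c):
    g[u][v] += c
    g[v][u] += 0          # make sure the residual back-edge exists

def min_cut_source_side(g, source, sink):
    """Edmonds-Karp max flow; returns the nodes still reachable from
    the source in the residual graph after the min cut."""
    flow = defaultdict(int)
    while True:
        parent, q = {source: None}, deque([source])
        while q and sink not in parent:
            u = q.popleft()
            for v in g[u]:
                if v not in parent and g[u][v] - flow[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if sink not in parent:
            return set(parent)
        # collect the augmenting path, then push the bottleneck flow
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(g[u][v] - flow[(u, v)] for u, v in path)
        for u, v in path:
            flow[(u, v)] += bottleneck
            flow[(v, u)] -= bottleneck

graph = defaultdict(lambda: defaultdict(int))
on_device = {"B": 5, "G": 40, "H": 30, "I": 25}   # lam -> m capacities
on_server = {"B": 20, "G": 8, "H": 6, "I": 5}     # m' -> mu capacities
big = sum(on_device.values()) + sum(on_server.values())
for m in on_device:
    add_edge(graph, "lam", m, on_device[m])
    add_edge(graph, m, m + "'", big)              # m and m' stay together
    add_edge(graph, m + "'", "mu", on_server[m])

# Cutting the cheaper of a method's two terminal edges pays the time of
# its chosen placement; with these numbers the cut severs lam->B and the
# server edges of G, H, I, so G, H, I (still attached to lam) offload.
lam_side = min_cut_source_side(graph, "lam", "mu")
offloaded = sorted(m for m in on_device if m in lam_side)
```

The cut value is then the total predicted response time of the chosen placement, which is what makes min cut a natural fit for this partitioning problem.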
We have modified the CyanogenMod [2] open source distribution to profile applications, without modifying the applications themselves, in order to find the methods' members and the method call graph. After profiling, we partition the application optimally to achieve the highest gain given the system environment. Finally, we offload the methods transparently to the server without modifying any source code or binary of the application. To profile applications, we have modified the instructions for method invocation and method return. In Dalvik, whenever a method is invoked, the call is translated to a method invocation instruction. The caller method's program counter and frame pointer are saved, and then the callee method starts its execution. When the callee method finishes its execution, it returns to the caller method. The return instruction saves the return value to the caller method's return address and resumes the caller method by restoring its program counter and frame pointer. We have modified the method structure of the Dalvik VM so that it records when a method is invoked and when it returns to the invoking method. From these two timestamps, we keep track of the execution time of each method. We use PowerTutor [7] to measure the methods' energy consumption. We built a Java-based tool to analyze the bytecode of the applications to construct the method call graph and thus find the list of variables and other methods accessed by the methods of an application. This one-time analysis takes around 30-40 seconds on average for each application. We derive the bytecode from the source code using the javap command to disassemble the class files. Note that we can also get the bytecode from the applications [20] whenever the source code is not available. To construct the method call graph, we analyze the bytecode of each method and look for invoke or invokevirtual instructions to find the list of callee methods for a given caller method.
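Extracting callees from a javap disassembly can be sketched as follows; the snippet and the regular expression are illustrative, and real `javap -c` output varies in spacing and detail.

```python
# Sketch of call-graph extraction from javap output. The disassembly
# snippet below is hypothetical; real `javap -c` output differs in
# detail, but callee references follow this "invoke*" shape.

import re

JAVAP_SNIPPET = """\
  public void render();
    Code:
       0: aload_0
       1: invokevirtual #12  // Method com/example/Scene.draw:()V
       4: invokestatic  #18  // Method com/example/Log.flush:()V
       7: return
"""

INVOKE_RE = re.compile(r"invoke\w+\s+#\d+\s+// Method ([\w/$.]+)[.:]")

def callees(disassembly):
    """Return the methods invoked inside one disassembled method body."""
    return [m.group(1) for m in INVOKE_RE.finditer(disassembly)]
```

Running this per method body yields the directly invoked methods; taking the transitive closure over those lists gives the reachable set RM.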
In this way, we find the list of directly invoked methods and thus the list of reachable methods RM from Algorithm 1. From bytecode analysis, we also find the parent class of each method, and thus the M CC (Method Closure Class) from Algorithm 3. Furthermore, whenever a method accesses a variable, it uses two separate instructions (namely IGET and IPUT) to retrieve and save values. We analyze the IGET and IPUT instructions to keep track of the variables (and their parent classes) accessed by the methods. To offload these methods, we have to send their parameters and variables (those with the IGET tag) to the server side. Once the method has successfully finished its execution and returned to the mobile device, we save the return value and synchronize the modified variables (those with the IPUT tag). If a method accesses a variable related to the mobile device's I/O, camera, or similar sensors, we do not offload it. We detect this by examining the variable's parent class, which is fetched from the bytecode text. In this way, we obtain the parameters for Algorithm 5. These parameters include the list of variables (and their classes) accessed by methods, the method call graph, the methods' execution times, the methods' parent classes, etc. To predict the on-device and on-server execution time with our learning model, we run the applications on the mobile device and offload them to the server in different environments for initial profiling. Then we conduct analysis to find the optimal partition of an application according to Algorithm 6. As discussed in Section III-A, we have to discard the methods that access mobile device equipment (camera, sensors, views, etc., though not exclusively). To find this list, we populate a list from the Android Java definitions. Based on the list of methods, their parent classes, and the global and class variables (and the parent classes of these variables), we discard these methods (and their callers) from offloading.
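The IGET/IPUT bookkeeping can be sketched as follows; the instruction triples and the device-only class prefixes are hypothetical simplifications of Dalvik bytecode and of the Android class list the paper compiles.

```python
# Sketch of the IGET/IPUT bookkeeping: fields read (IGET) must be
# shipped to the server before offloading; fields written (IPUT) must
# be synchronized back afterwards. The instruction stream is a
# hypothetical, simplified rendering of Dalvik bytecode.

INSTRUCTIONS = [
    ("IGET", "com.example.Scene", "width"),
    ("IGET", "android.hardware.Camera", "preview"),
    ("IPUT", "com.example.Scene", "frameCount"),
]

DEVICE_ONLY_PREFIXES = ("android.hardware.", "android.view.")

def variable_sets(instructions):
    """Split accessed fields into send-before and sync-after sets,
    and flag the method as non-offloadable on device-only access."""
    send, sync, offloadable = set(), set(), True
    for op, cls, field in instructions:
        if cls.startswith(DEVICE_ONLY_PREFIXES):
            offloadable = False          # e.g. camera access pins it locally
        if op == "IGET":
            send.add((cls, field))
        elif op == "IPUT":
            sync.add((cls, field))
    return send, sync, offloadable
```

The parent class of each accessed field drives the device-only check, matching the class-based filtering described above.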
We thus find the list of methods to offload along with the variables’ states that must be synchronized before and after offloading. We intercept those methods’ invocations and offload them ...
Context 2
... We intercept those methods' invocations and offload them accordingly as described in the next subsection. We keep the offloading mechanism transparent to the applications by adopting the transparent mechanism proposed in POMAC [15].
Following the same principles, we trap the method invocation instruction and get the parameters and variables required by the method to execute on the server side. POMAC [15] can offload the methods which do not access any class or global variable. POMAC only retrieves the methods' ...

Similar publications

Article
Full-text available
Statistics demonstrate that Android is the most widely used operating system on mobile phones and tablets across the world. These mobile devices operate using batteries of limited size and capacity. Therefore, energy management when running mobile applications becomes of vital importance. In this paper, an energy-efficient android application for...
Article
Full-text available
Studies related to the resource consumption of mobile devices and mobile applications have been brought to the fore lately, as mobile applications depend largely on their resource consumption. The study aims to identify the key factors and provide a holistic understanding of how a factor influences Consumption Pattern (CP) effectiveness for an android platform mo...

Citations

... On-demand data transfer strategies are useful for mobile crowd sensing applications. Opportunistic data transfer strategies monitor the connected devices and systems and find a feasible environment for pushing or pulling data streams among connected devices and systems (Hassan et al., 2015). Smart data reduction is another approach to data transfer, where mobile devices perform the data stream mining operations and the results are communicated only if there is a significant change in the data stream (Jayaraman, Gomes, et al., 2014 ...
Thesis
Mobile edge cloud computing (MECC) systems extend the computational, networking, and storage capabilities of centralized cloud computing systems through edge servers at one-hop wireless distances from mobile devices. Mobile data stream mining (MDSM) applications in MECC systems involve massive heterogeneity at the application and platform levels. At the application level, the program components need to handle continuously streaming data in order to perform knowledge discovery operations. At the platform level, the MDSM applications need to seamlessly switch the execution processes among mobile devices, edge servers, and cloud computing servers. However, the execution of MDSM applications in MECC systems becomes hard due to multiple factors. The critical factors of complexity at the application level include the data size and data rate of continuously streaming data, the selection of data fusion and data preprocessing methods, the choice of learning models, learning rates and learning modes, and the adoption of data mining algorithms. In turn, platform-level complexity increases due to mobility and the limited availability of computational and battery power resources in mobile devices, high coupling between application components, and dependency on Internet connections. Considering these complexity factors, existing literature proposes static execution models for MDSM applications. The execution models are based on either standalone mobile devices, mobile-to-mobile, mobile-to-edge, or mobile-to-cloud communication models. This thesis presents a novel architecture that utilizes far-edge mobile devices as the primary execution platform for MDSM applications. At the secondary level, the architecture executes MDSM applications by enabling direct communication among nearby mobile devices through local Wi-Fi routers without connecting to the Internet.
At the tertiary level, the architecture enables far-edge-to-cloud communication when onboard computational and battery power resources are unavailable and no other mobile devices are present in the locality. The thesis also presents dynamic and adaptive execution models to handle complexity at the application and platform levels. The dynamic execution model supports data-intensive MDSM applications with low computational complexity, whereas the adaptive execution model supports the seamless execution of MDSM applications with low data intensity but high computational complexity. Multiple evaluation methods were used to verify and validate the performance of the proposed architecture and execution models. Validation and verification were performed using High-Level Petri Nets (HLPN) and the Z3 solver. The simulation results revealed that all states in the HLPN model were reachable and that the overall design presented a workable solution. However, the proposed architecture faced the state explosion problem, in which conventional static execution models fail because the system may enter multiple execution states from a single state; the proposed dynamic and adaptive execution models help address this problem. To this end, the proposed execution models were tested with multiple MDSM applications mapped to a real-world use case of activity detection using MECC systems. The experimental evaluation considered battery power consumption, memory utilization, makespan, accuracy, and the amount of data reduced on mobile devices. The comparison showed that the proposed dynamic and adaptive execution models outperformed the static execution models in multiple aspects.
... The comparison table from H.u. Rehman et al. (Journal of Network and Computer Applications, 2016) classifies data transfer strategies as follows (the first and last rows are truncated in the excerpt):

                            Push-based  Pull-based  On-demand  Opportunistic  Smart Data Reduction
  ...                       Yes         N           Yes        N              Yes
  Lin et al. (2013)         Yes         N           N          N              N
  Hassan et al. (2015)      Yes         N           N          Yes            N
  Talia and Trunfio (2010)  Yes         N           N          N              N
  Yoon (2013)               Yes         ...

... data sources and input either directly into light-weight data mining algorithms or redirect through the adaptation engine if OMM is operating in adaptive mode, (c) a resource monitor component that tracks memory, CPU, and battery power in mobile devices for seamless execution and adaptations, (d) a library of light-weight data stream mining algorithms, (e) a library that enables visualization facilities in mobile devices, and (f) an adaptation engine that executes resource- and situation-aware adaptation strategies. ...
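The resource monitor and adaptation engine described in (c) and (f) can be sketched as a simple threshold policy (the class, function, and threshold values below are hypothetical illustrations, not OMM's actual API):

```python
from dataclasses import dataclass

@dataclass
class DeviceState:
    cpu_load: float       # fraction in [0, 1]
    free_memory_mb: int
    battery_pct: int

def choose_mode(state, cpu_max=0.8, mem_min_mb=100, battery_min_pct=20):
    """Resource-aware adaptation: fall back to a lighter execution mode
    when any monitored resource crosses its (illustrative) threshold."""
    constrained = (state.cpu_load > cpu_max
                   or state.free_memory_mb < mem_min_mb
                   or state.battery_pct < battery_min_pct)
    return "adaptive" if constrained else "full"

print(choose_mode(DeviceState(0.5, 512, 80)))  # prints full
print(choose_mode(DeviceState(0.9, 512, 80)))  # prints adaptive
```

A real monitor would sample these values periodically and trigger the adaptation engine on mode transitions rather than on every reading.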
Article
Full-text available
The convergence of the Internet of Things (IoT), mobile computing, cloud computing, edge computing, and big data has brought a paradigm shift in computing technologies. New computing systems, application models, and application areas are emerging to handle the massive growth of streaming data in mobile environments such as smartphones, IoT devices, body sensor networks, and wearable devices, to name a few. However, the challenge arises of how and where to process the data streams in order to perform analytic operations and uncover useful knowledge patterns. Mobile data stream mining (MDSM) applications involve a number of operations: 1) data acquisition from heterogeneous data sources, 2) data preprocessing, 3) data fusion, 4) data mining, and 5) knowledge management. This article presents a thorough review of execution platforms for MDSM applications, along with a detailed taxonomic discussion of heterogeneous MDSM applications and a detailed literature review of methods used to handle heterogeneity at the application and platform levels. Finally, a gap analysis is articulated and future research directions are presented for developing next-generation MDSM applications.
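The five MDSM operations listed above can be sketched as a chain of streaming stages (a toy illustration with made-up stage names; the mean-based "fusion" and threshold "mining" stand in for real algorithms):

```python
def acquire(sources):
    """1) Data acquisition: merge heterogeneous sources into one stream."""
    for source in sources:
        yield from source

def preprocess(stream):
    """2) Preprocessing: drop missing readings, normalize types."""
    for x in stream:
        if x is not None:
            yield float(x)

def fuse(stream, window=3):
    """3) Data fusion: combine each window of readings into one value."""
    buf = []
    for x in stream:
        buf.append(x)
        if len(buf) == window:
            yield sum(buf) / window
            buf = []

def mine(stream, threshold):
    """4) Data mining: label each fused value; 5) knowledge management
    would store or forward these labels downstream."""
    for x in stream:
        yield ("alert" if x > threshold else "normal", x)

sensor_a = [1.0, None, 2.0, 9.0]
sensor_b = [3.0, 8.0]
results = list(mine(fuse(preprocess(acquire([sensor_a, sensor_b]))), threshold=5.0))
```

Generator chaining keeps memory use constant, which matters on the resource-constrained mobile platforms the article surveys.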
Chapter
As a new computing paradigm, mobile edge computing (MEC) can use the wireless access network to provide users with required services and computing power nearby, improving the user experience. Mobile devices can offload computation-intensive tasks to MEC servers, which can greatly reduce the energy consumption of mobile devices while extending their battery life. However, MEC-based task assignment becomes more difficult due to the uncertainty of task arrivals and the highly dynamic state of the wireless channel. In this chapter, both CPU cycle frequency and task allocation are considered, with the goal of keeping the task queue stable while minimizing energy consumption. The minimization is first described as a stochastic optimization problem and transformed into a deterministic optimization problem using the Lyapunov optimization method; the transformed problem is then decoupled into two subproblems, which can be solved optimally in parallel. On this basis, a Computation Offloading and Frequency Scaling for Energy Efficiency (COFSEE) algorithm for online task offloading and frequency scaling is proposed. Experimental results show that the energy consumption of COFSEE is about 15% lower than that of the RLE algorithm and 38% lower than that of the RME algorithm, verifying the effectiveness of COFSEE.
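The Lyapunov step can be illustrated with a per-slot drift-plus-penalty rule (a minimal sketch of the general technique, not the chapter's COFSEE algorithm; the action set, costs, and V value are made-up illustrations):

```python
import random

def drift_plus_penalty(arrivals, V=10.0):
    """Each slot, pick the action minimizing V*energy - Q(t)*service,
    trading energy cost against queue backlog Q(t)."""
    # Candidate actions as (bits served, energy cost) per slot.
    actions = [(0, 0.0),   # stay idle
               (2, 1.0),   # compute locally at a low CPU frequency
               (4, 3.5),   # compute locally at a high CPU frequency
               (3, 1.5)]   # offload the task to the edge server
    Q, total_energy = 0, 0.0
    for a_t in arrivals:
        served, energy = min(actions, key=lambda a: V * a[1] - Q * a[0])
        total_energy += energy
        Q = max(Q - served, 0) + a_t  # queue update
    return Q, total_energy

random.seed(0)
arrivals = [random.randint(0, 3) for _ in range(200)]
backlog, energy = drift_plus_penalty(arrivals)  # backlog stays bounded
```

A larger V weights energy more heavily and tolerates a longer queue; this energy/stability trade-off is exactly what the chapter's stochastic formulation makes precise.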
Chapter
With the resource-constrained nature of mobile devices and the resource-abundant offerings of the cloud, several promising optimization techniques have been proposed by the green computing research community. Prominent techniques and unique methods have been developed to offload resource- and computation-intensive tasks from mobile devices to the cloud. Most existing offloading techniques can only be applied to legacy mobile applications, as they are motivated by existing systems; consequently, they are realized with custom runtimes, which incur overhead on the application. Moreover, existing approaches that can be applied during the software development phase are difficult to implement (being based on manual processes) and also fall short on overall (mobile-to-cloud) efficiency in software quality attributes, or lack awareness of full-tier (mobile-to-cloud) implications.
Article
The rapid evolution of mobile devices, their applications, and the amount of data they generate causes a significant increase in bandwidth consumption and congestion in the network core. Edge computing offers a solution to these performance drawbacks by extending the cloud paradigm to the edge of the network using nodes capable of processing compute-intensive tasks. In recent years, vehicular edge computing has emerged to support mobile applications. This paradigm relies on vehicles as edge nodes that provide storage, computation, and bandwidth resources to resource-constrained mobile applications. In this article, we study the challenges of computation offloading for vehicular edge computing and propose a new classification for a better understanding of the literature on vehicular edge computing design. Our taxonomy classifies partitioning solutions into filter-based and automatic techniques; scheduling into adaptive, social-based, and deadline-sensitive methods; and data retrieval into secure, distance-based, mobility-prediction, and social-based procedures. By reviewing and analyzing the literature, we found that vehicular edge computing is feasible and a viable option for addressing the increasing volume of data traffic. Moreover, we discuss the open challenges and future directions that must be addressed for efficient and effective computation offloading and retrieval from mobile users to vehicular edge computing.
Article
With the resource-constrained nature of mobile devices and the resource-abundant offerings of the cloud, several promising optimisation techniques have been proposed by the green computing research community. Prominent techniques and unique methods have been developed to offload resource-intensive tasks from mobile devices to the cloud. Although these schemes address similar questions within the same domain of mobile cloud application (MCA) optimisation, evaluation is tailored to each scheme and is also solely mobile-focused, making it difficult to compare clearly with other existing counterparts. In this work, we first analyse the existing, commonly adopted evaluation techniques and then, to fill this gap, propose the behaviour-driven full-tier green evaluation approach, which adopts the behaviour-driven concept for evaluating MCA performance and energy usage, i.e., green metrics. To automate the evaluation process, we also present and evaluate the effectiveness of a resulting application program interface and tool driven by this approach. The application program interface is based on Android and has been validated with an Elastic Compute Cloud instance. Experiments show that Beftigre is capable of providing more distinctive, comparable, and reliable green test results for MCAs.